Here’s What You Need To Know
In the 1990s, American regulators chose to step back and allow the emerging “Information Superhighway” to innovate in an open, competitive playing field free from regulatory pressures. As we enter the new age of generative artificial intelligence (AI), policymakers around the world appear far more eager to regulate this disruptive new technology and the companies bringing it to digital life.
Instead of waiting to see how generative AI evolves and how society adopts it, governments are rushing ahead to create a patchwork of regulations that will significantly impact any business hoping to build or utilize AI. While it is not likely to slow policymakers’ regulatory zeal, their solutions will almost certainly fall victim to Amara’s Law (futurist Roy Amara’s caution that “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run”). Here’s what you need to know to help your organization shape the emerging debate with an information advantage.
What AI Is … And Isn’t
What makes generative AI unique is its ability to create sophisticated content from basic prompts provided by humans. Yet while the memos, essays, ideas, and images spit out by AI chatbots may make the tools powerful “copilots” in the workplace and beyond, AI is still far from ready to replace human workers. That’s because the large language model (LLM) under the hood of tools like ChatGPT is simply a powerful recognizer of patterns, predicting the next word in a sentence or pixel in a picture based on patterns it learned from being trained on massive amounts of human-produced data.
Chatbots are only as good as the data they’re trained on, so while they may be (mostly) know-it-alls with the ability to quickly reproduce what seem like facts, they can’t yet replace the critical thinking, creativity, judgment, and multi-step task completion offered by humans. Indeed, one of the emerging in-demand jobs of the AI age is the human prompt engineer.
Policymakers Aren’t Waiting For Prompts
Even though most businesses are only in the first stages of adopting AI, elected officials and regulators across numerous geographies and all levels of government are ready to act. For businesses planning to win in the AI era, staying ahead of these efforts is critical.
AI regulation was already in the works. Even before ChatGPT arrived in late 2022, officials had AI in their sights. The European Union had planned to regulate certain uses of AI like social scoring and “some instances of facial recognition.” China had implemented rules around recommendation algorithms and deep fakes. In the United States, the Biden Administration released an AI Bill of Rights blueprint and, in 2022, nine AI-related bills passed at the federal level and 21 at the state level.
ChatGPT scrambles policymakers’ best-laid plans. ChatGPT triggered a generative AI arms race, and regulators are rushing to catch up. Rulemaking efforts that might once have taken months or years to hash out are now being considered in days and weeks. In just 11 days, regulators in the EU revamped their AI Act to address copyright concerns over ChatGPT. Italy banned OpenAI and gave it just 20 days to address issues around user privacy and the protection of minors (OpenAI complied). China released draft regulations just months after its last round of AI rules went into effect, requiring chatbots to follow “socialist core values.” U.S. Senate Majority Leader Chuck Schumer is circulating a “broad framework” for regulating AI.
Expect a patchwork of regulation. It’s not only Washington, Beijing, and Brussels rushing to place guardrails around generative AI. State and local governments are moving ahead with their own efforts. When it comes to federal-level rules, it’s not clear if Washington will be able to muster a coherent, unified plan. The Commerce Department is in the early stages of the rulemaking process, but other agencies are jumping into the fray – with several regulators insisting they have existing authority to police an AI-influenced world. The CFPB, Justice Department, EEOC, and FTC recently asserted their intent to fight AI bias with laws that are already on the books. Existing laws like California’s 2018 legislation that boosted transparency around online bots could be used by citizens to sue over undisclosed AI communication.
Policy Advocates Are Already Generating Their Response
Outside voices want to shape government’s response. Policymakers aren’t waiting to join the game, and neither are the stakeholders and advocates on all sides of the AI playing field. On one side are those who see artificial intelligence as an existential threat to humanity. On the other are AI developers themselves, who of course want their research to continue. In the middle are any number of stakeholders with interests in how AI is controlled and regulated, including activists concerned with AI’s impact on everything from fairness to climate.
Companies are trying to get ahead of concerns. Businesses in the AI space see what’s coming. TikTok is making a tool that would allow its creators “to disclose they used generative artificial intelligence” for their content. Apple has already delayed approval of an AI-powered app over concerns about “inappropriate content for children.” AI developers themselves have supported the idea of regulation.
The emerging battle lines will touch a wide range of industry sectors and segments of society. Because generative AI will affect so many aspects of work and life, regulatory efforts will cover equally wide ground. Expect major battles over algorithmic bias, intellectual property, crime, data privacy, tort liability, product marketing claims, and much more. Meanwhile, businesses and others will have to navigate new reputational risks like deepfake scams and public backlash over using AI.
Get Smart To Stay Ahead
The AI era is here, and it’s moving faster than you can imagine. As generative AI’s feats grow more impressive by the day, the pressure for governments to regulate this new technology will only grow. Public affairs professionals in the technology sector can’t afford to tune out the conversation. Understanding the full range of stakeholders and how they will attempt to shape the emerging policy and regulatory frameworks – not to mention the public’s perception and understanding of what AI is (and isn’t) – will be crucial to ensuring your organization’s voice is heard and its interests protected. With so many jurisdictions rushing to act, staying ahead may prove daunting. If Delve can help you build your information advantage in the AI debate, reach out to start a conversation (no chatbots involved).