The First AI Election?

If you read the news, you’ve noticed the many, many articles these past few months warning that the first AI election in the United States will be an untamable swirl of disinformation that threatens our democracy. The narrative, reinforced by a deepfaked Biden robocall in the New Hampshire primary, has captured the minds of numerous policymakers and policy advocates, who are pushing legislation across the country to restrict campaign deepfakes and urging regulators to consider new rules around the use of AI in elections.

So are robots replacing the Russian troll factories in spawning out-of-control deepfakes, and will that really shift the outcome of the election? Misinformation, whether driven by bots or not, has been a fixture of campaign cycles for some time. And AI could actually be a boon to campaigns that would otherwise struggle against well-funded opponents. That reality won’t stop the rush to regulate, or the media’s fixation on any instance that confirms its bias. Here’s what public affairs professionals preparing to navigate their first AI election need to know.

Lawmakers Rush to Regulate

Whether AI-fueled disinformation is a game-changer or not, many policymakers are hoping they never have to find out. State lawmakers are rushing to regulate the use of deepfakes in elections, with 11 states already enacting laws and another 27 considering legislation. Yet this issue isn’t new: it follows efforts in several states to combat election disinformation, starting in California in 2018 and gaining steam after the 2020 presidential race. Federal lawmakers, meanwhile, have introduced their own proposals to limit deceptive deepfakes, and policymakers and activists alike want the FEC to update its regulations to cover deceptive uses of AI. Local election officials are also asking Congress for resources to help them combat AI threats.

Will New Deepfake Rules Matter?

The specter of misinformation and disinformation that policymakers are racing to contain has haunted us for a number of election cycles. Remember Pope Francis’ alleged endorsement of Donald Trump in 2016? The sprawling Pizzagate conspiracy? In 2016, the average American saw one or more fake stories on social media in the run-up to the election. Indeed, fake photos and videos have long been possible without today’s AI technologies, and simpler tools like Photoshop have duped voters in the past.

Of course, generative AI makes fake content even easier to spin up at scale—especially in the hands of bad actors. But an increase in the supply won’t necessarily change the demand-side economics of election news, like how much election-related content voters consume, how much they care about that content, or how willing they are to be persuaded by anything outside of their partisan bubble. Plus, the hype around the AI threat this year has spurred tech giants to promise more vigilance in their policing of fake content. In other words, fake content may be everywhere this election cycle, but will its impact on election outcomes be noticeably more significant than fake information of years past?

Even if the intensity and scale of disinformation increases due to AI, there’s another side to the equation. Generative AI tools can be a force multiplier for the very people who are supposed to influence the outcome of an election: campaigns themselves.

AI Tech Can Be a Boost for Campaigns

Campaigns run on three things: people, money, and time. But the old campaign adage—that you can always find more people and money, but never more time—may be upended this year by generative AI. The new tools available this cycle can supercharge much of the monotonous work that campaigns undertake, meaning they’ll need fewer people and fewer dollars to be competitive (our CEO unpacked the possibilities of generative AI in campaigns in an interview with the University of Virginia’s Center for Politics last year). This AI boost comes just in time, as campaigns struggle to hire and fundraise in a post-COVID, politically burned-out environment.

Generative AI can also help level the playing field among candidates, giving those with fewer resources the ability to create more content, connect with more voters, and better predict voter preferences even without a deep donor Rolodex. Of course, for AI to be an equalizer of political opportunities and voices, it must be made available for candidates to use.

Yet if companies issue blanket bans on political use cases, as OpenAI did in its recently updated usage policies, the candidates who benefit most will be incumbents with traditional financial resources at their disposal. The underdogs will lose out on AI’s force-amplifying technology, while bad actors will simply turn to jailbroken models or download uncensored, publicly available models that can be trained to do their dirty work. It’s clear technology companies want to avoid getting dragged into another conundrum around liability and content moderation, but the answer to the AI accountability challenge can’t be to run away entirely from transparent, legitimate use cases by campaigns and other responsible actors.

Companies Stuck in the Middle

OpenAI isn’t the only AI company facing a tough decision this election season. Every company building AI models, deploying AI tools, or surfacing AI content to voters is squeezed between the demands of concerned policymakers and stakeholders on one side and those who hope to use AI in campaigns—bad actors and honest ones alike—on the other. The AI obsession of this election cycle will bring greater scrutiny than ever before to these companies, as lawmakers, stakeholders, and the public expect them to police generative AI content. Expect large tech companies to keep trying to get ahead of the problem, and expect lawmakers to keep introducing proposals that would require industry to take action against AI misinformation.

Nor are AI companies alone—companies across every industry will need a plan to manage and address misinformation and disinformation as the election heats up and even after the votes come in. Whether your company or client is building an AI product that could be used to influence elections, tracking campaign tech policy issues that impact the bottom line, or using AI tools that could face new election-related rules, you’ll need a plan to survive both the hype and reality of AI’s impact on the campaign season this year.

Public affairs professionals can’t afford a wait-and-see strategy when it comes to the first AI election and how it could shape the policy landscape. Instead, they need a smart playbook to stay ahead of the curve. At Delve, we can help you separate the policy signal from the campaign noise, whether it’s coming from bots or Biden (or Trump or any of the other candidates up and down the ballot).