An AI-generated milestone in New Hampshire?
An “unlawful” robocall and the future of AI election threats
On Monday, New Hampshire’s Office of the Attorney General warned voters to disregard an “unlawful” robocall made in President Biden’s voice telling New Hampshire voters not to vote in the state’s Tuesday presidential primary. According to the Attorney General’s Office, “Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated based on initial indications.”
The “Biden” robocall appears to be the latest in a growing list of examples of AI-generated content used in this election cycle. The lead-up to the 2024 election has already become a testing ground for rapidly advancing generative AI technology, as demonstrated by its use by political candidates, parties, and super PACs alike.
The robocall marks a milestone in how AI-generated content can be used to confuse voters and suppress turnout in the final days and hours before an election. At the same time, it epitomizes how rapid advances in generative AI are poised to escalate existing threats to the 2024 cycle rather than introduce qualitatively new ones.
Last-minute misleading communications about voting processes have been a recurrent problem in US elections. Now, AI-generated content has the potential to make those efforts more convincing.
So, what can and should be done to mitigate the AI-amplified threat of voter suppression in the final moments before an election? While there is no silver bullet, we must look to both familiar and new safeguards in 2024 and beyond.
To name just a few examples:
First, federal and state law enforcement should act quickly to investigate and, where appropriate, pursue charges against the perpetrators of fraudulent election-related communications. New Hampshire’s Office of the Attorney General announced yesterday that the state’s Election Law Unit is investigating the robocall. Depending on the content and context of the messages, federal civil rights statutes (including prohibitions on interference with the right to vote and on voter intimidation), federal criminal law, and state-law prohibitions on fraud and impersonation (where available) may provide the basis for criminal charges and civil suits.
Second, election officials and their civil society partners should place renewed emphasis on prebunking to proactively equip voters with accurate voting information. There may not be enough time to counter a late-hour election disinformation campaign, so the best defense is a good offense: by ensuring that voters, especially those in communities that have been repeat targets of voter suppression, hear a steady drumbeat of accurate voting information from trusted sources, we can anticipate and mitigate common suppression and disinformation tactics.
Third, we need new laws. Policymakers should heed both Republican and Democratic calls for reform and pass legislation that addresses the use of AI-generated content in election communications while respecting the First Amendment’s protection of political speech. Policies that require the disclosure of synthetic content in election communications, such as the REAL Political Ads Act, are low-hanging fruit. Disclosure is particularly important for synthetic audio and video, formats that voters have historically assumed to be authentic even when they might have questioned similar messages in writing.
Big picture, American voters should recognize that the “Biden” robocall is the clearest signal yet that our information ecosystem has entered a new era, one that requires voters to navigate long-standing threats such as voter suppression dressed up in the more realistic and convincing guise of synthetic content. In this case, both media coverage and official action were quick and comprehensive, a reminder that voters should rely on voting guidance gleaned directly from trusted, authoritative sources, like local and state election officials.