This piece was co-published with the AI + Elections Clinic at Arizona State University. Follow them here.
What happens to elections if artificial intelligence manages to erase the boundary between fact and fiction? If it becomes ubiquitous in the operation of election administration systems? If it enables a new wave of cyberattacks?
Many of the “nightmare scenarios” for election threats now rightfully involve AI. Even as the most significant concerns have yet to come to pass, the potential for the technology to be used maliciously grows every election cycle.
At the same time, as with any technological change, the risks of AI are easily overstated. Most likely, artificial intelligence is not introducing new categories of election threats. Instead, it is supercharging long-standing risks. Plus, with appropriate guardrails and human review, there are salutary uses of AI in the elections context (more on those at the end).
Based on work by Protect Democracy and the AI + Elections Clinic at Arizona State University, we believe AI most clearly amplifies three existing risks to U.S. election administration:
The malicious spread of false information about elections.
Intentional cyberattacks (and other attacks) on election infrastructure.
Inadvertent mistakes, accidents, and errors in critical systems.
Threat one: False information
Generative AI provides malicious actors another tool for pushing false election information to voters at scale.
Deepfakes are poised to play a growing role in influencing future elections as they become increasingly difficult to distinguish from human-generated content. Audio deepfakes, in particular, are already hard for humans to detect. Meanwhile, the video creation platforms launched over the last year show how rapidly the realism of AI-generated video is improving, and that trajectory will only continue. An early case from January 2024 was a deepfaked Joe Biden (“malarkey” and all) robocalling New Hampshire voters.
Read more: An AI-generated milestone in New Hampshire?
The 2024 election also saw coordinated Russian disinformation campaigns driven by deepfaked videos, and we have seen other troubling examples of deepfakes disrupting elections abroad. As AI makes it more difficult to tell what’s true and what’s false, bad actors will likely use every tool at their disposal to confuse and deceive voters.
Threat two: Attacks on election infrastructure
Election infrastructure (IT systems or databases, for example) has long been flagged as a potential target for bad actors, and that trend is likely to continue.
AI is already increasing the scale and sophistication of attacks on infrastructure, and before long the targets could include the systems that keep electoral processes functioning. Just in November, Anthropic announced that it had detected what is potentially the first known cyberattack driven largely by autonomous “agentic” AI. Distributed denial-of-service (DDoS) and phishing attacks are other areas where bad actors can use AI to improve their attacks on infrastructure, either by crashing those systems or by accessing the sensitive data within them.
In short, a future where AI systems are capable of targeting election infrastructure more effectively than humans is now plausible.
Threat three: Unreliability
AI is simply not always reliable, and elections are an area where even mistakes made with good intentions can have severe and far-reaching consequences. By its nature, the technology is probabilistic: some degree of randomness is built into how any large language model (LLM) functions. Elections, however, have no room for error.
Take chatbots powered by large language models: They’re now part of daily life for a huge number of Americans, yet both research and real-world use show why election information generated by AI tools always needs to be taken with a grain of salt. The wide reach of these systems increases the damage any one of them could cause by inadvertently giving users false election information. As AI continues to be incorporated into election administration itself (more on that below), officials should learn from these cases and treat AI as fallible, just like any other tool.
AI tools have become more accessible than ever before, in both financial cost and ease of use. That democratization of the technology makes it easier to create deepfakes that reach millions of voters, to mount technical attacks on election infrastructure, and to produce well-intentioned yet faulty outputs with devastating effects. How election officials and the public respond to all the ways AI can amplify election disruptions will help determine how damaging those disruptions are.
AI’s benefits to election administration
But as the announcement of Protect Democracy’s AI and Democracy Action Lab noted a few weeks ago, AI has the power both to enhance our democracy and to erode it. Just as it amplifies pre-existing threats to elections, AI can also help fortify effective election administration.
To the degree that AI can be used productively in an election context, it should likely be limited to:
Uses paired with human review before and after application: AI can add another set of “eyes” to an existing review process, but it must always be supported by careful human review at multiple points, both before and after the AI is used.
Uses that draw only on publicly available data: Unless you are using a closed, controlled LLM or a secured government-grade service such as Microsoft Copilot GCC or ChatGPT Gov, any election-related data entered into an AI system should come from publicly available sources, and any AI application should be completely firewalled from non-public data, like Social Security numbers.
Uses that leverage AI’s strengths, such as data synthesis: AI will add the most value for election administrators in processes that are time-consuming and challenging for humans, including tasks that require synthesizing many different sources of data or large amounts of data that humans cannot easily digest.
Even with careful selection of use cases, however, any use of AI retains some risk, and election officials should carefully monitor outputs to minimize those risks.
Arizona State University’s AI + Elections Clinic has been working to identify transparent, safe, and effective AI use cases that election officials have already tested, while engaging with and finding ways to mitigate the risks that accompany any use of AI tools.
Two examples that meet the criteria described above:
First, AI holds promise for poll worker training. Poll workers are often temporary employees without election administration experience. Their trainers, on the other hand, are typically full-time election officials who live and breathe elections every day. It can be challenging for election officials to put themselves in the shoes of newcomers when designing poll worker training. Luckily, reframing content for a specific audience’s context and needs is something at which AI excels. Election officials can ask an AI assistant to adjust the language in their training materials and develop drafts tailored to poll workers’ level of experience, which officials should then carefully review for accuracy and nuance.
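For illustration, here is a minimal sketch of what that kind of reframing prompt might look like. The excerpt, the wording, and the small Python helper are our own illustrative assumptions rather than a prescribed workflow, and any draft the AI returns still goes to a human trainer for review.

```python
# A minimal, hypothetical sketch of the prompting pattern described above.
# The excerpt is illustrative, and no AI call is made here: the prompt would be
# sent to whatever AI assistant the election office has already approved.

def build_reframing_prompt(training_excerpt: str) -> str:
    """Ask an AI assistant to redraft official training text for first-time poll workers."""
    return (
        "You are helping rewrite poll worker training materials.\n"
        "Audience: first-time, temporary poll workers with no prior election experience.\n"
        "Rewrite the excerpt below in plain language. Keep every procedural step and\n"
        "legal requirement unchanged, and flag anything you are unsure about.\n\n"
        f"EXCERPT:\n{training_excerpt}"
    )

excerpt = "Provisional ballots are issued when a voter's eligibility cannot be confirmed at check-in..."
prompt = build_reframing_prompt(excerpt)
print(prompt)  # A trainer reviews the resulting draft line by line before any use.
```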
Read more about poll worker training materials and AI: Prompts in practice — Using AI to create poll worker training flashcards.
Second, AI could help predict turnout. Accurately predicting turnout for in-person voting can be a challenge for election officials. The number of people who show up at a poll site at a given time can be influenced by a huge range of factors like traffic, local events, and voter preferences.
Election officials must base decisions about where to allocate staff, equipment, and other resources on basics such as the number of registered voters in a precinct, and weighing the full range of additional factors appropriately can be challenging. The consequences of inaccurate predictions are all too real: long lines of voters waiting to cast their ballots.
Given their data analysis and pattern-recognition capabilities, AI tools may be well suited to support election officials in the analysis that informs these predictions. Election officials can use AI to consider a wider range of specific, publicly available data sources when building turnout forecasts. These could include reports from previous elections on turnout by precinct, reports of lines or other negative experiences in past elections, or announcements about construction or other nearby disruptions. To minimize risks, election officials should cross-check the AI’s outputs against the data sources they directed it to use and verify any findings.
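As a hedged illustration of that cross-checking step, the sketch below flags any AI-assisted forecast that falls well outside a precinct’s historical turnout range. The precinct names, figures, and tolerance are invented for the example; a real check would run against the public data the official actually supplied.

```python
# A minimal, hypothetical sketch of cross-checking an AI-assisted turnout forecast
# against the public historical data the official supplied. All figures are made up.

historical_turnout = {            # turnout share by precinct in prior comparable elections
    "Precinct 12": [0.48, 0.52, 0.55],
    "Precinct 31": [0.61, 0.63, 0.60],
}
ai_forecast = {                   # forecast drafted with AI assistance, pending human review
    "Precinct 12": 0.54,
    "Precinct 31": 0.79,          # deliberately implausible, to show the check firing
}

TOLERANCE = 0.10  # flag forecasts more than 10 points outside the historical range

for precinct, forecast in ai_forecast.items():
    history = historical_turnout.get(precinct, [])
    if not history:
        print(f"{precinct}: no historical data supplied -- review manually")
        continue
    low, high = min(history) - TOLERANCE, max(history) + TOLERANCE
    if low <= forecast <= high:
        print(f"{precinct}: forecast {forecast:.0%} is within the expected range")
    else:
        print(f"{precinct}: forecast {forecast:.0%} is OUTSIDE the expected range -- verify sources")
```

The same principle applies to any input the forecast relies on: if the AI cites a source the official did not supply or cannot verify, the output should be set aside rather than acted on.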
Read more about election resource planning and AI: Using AI and simple math to leverage other tech and better predict election resource needs.
These examples differ from other possible applications of AI, such as signature matching or voter list maintenance, where the repeated, granular decision-making involved makes checking AI outputs difficult and raises the risk that unseen errors lead to potentially catastrophic impacts. They also focus on smaller yet critical background tasks in election officials’ workflows, rather than high-stakes, public-facing processes like voter registration.
The AI + Elections Clinic will be holding a series of training sessions throughout 2026 to help election officials differentiate between these types of AI applications and implement AI tools responsibly, including in Los Angeles on February 12 and Phoenix on February 26.
We encourage any interested election officials to join. Email the Clinic here for inquiries about upcoming sessions.