Modern elections no longer unfold only at polling stations. They unfold online, in an information battlefield where rumors, conspiracy theories, and fabricated narratives can spread faster than official results. The rise of generative artificial intelligence has intensified fears that elections could be overwhelmed by synthetic propaganda. Yet emerging research suggests that the same technology capable of amplifying misinformation may also help neutralize it.

In a recent academic study titled “Prebunking Election Rumors: Artificial Intelligence Assisted Interventions Increase Confidence in American Elections,” researchers from the California Institute of Technology, Washington University in St. Louis, and Cambridge University propose a promising defensive strategy: using AI to inoculate voters against false narratives before they spread.

The concept behind the research is known as prebunking. Unlike traditional fact-checking, which responds to misinformation after it circulates, prebunking aims to warn people about misleading claims in advance. The logic is borrowed from medical immunology. Just as vaccines expose the immune system to a weakened version of a virus, prebunking exposes people to the tactics used in misinformation campaigns so they can recognize them later.

Large language models make this strategy scalable. AI systems can generate concise explanations that anticipate and debunk common election myths, such as claims about hacked voting machines, manipulated vote counts, or fraudulent mail ballots. Because these messages can be produced rapidly and adapted to new rumors, AI allows election authorities or civil society groups to respond at the speed of the information environment.
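To make the structure of such a message concrete, here is a minimal, purely illustrative sketch. It is not code from the study: it uses a fixed template to assemble the three elements inoculation theory calls for (a forewarning, a weakened dose of the manipulation tactic, and a refutation), where in practice an LLM would draft the wording. The names `Rumor` and `prebunk_message` are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Rumor:
    claim: str       # the misleading claim to inoculate against
    tactic: str      # the manipulation tactic the claim relies on
    refutation: str  # the accurate information, supplied up front

def prebunk_message(rumor: Rumor) -> str:
    """Assemble a prebunking message from the three inoculation elements."""
    return "\n".join([
        # 1. Forewarning: alert the reader that a misleading claim is coming.
        f"Warning: you may soon see claims that {rumor.claim}.",
        # 2. Weakened dose: name the tactic so it can be recognized later.
        f"Such claims typically rely on {rumor.tactic}.",
        # 3. Refutation: state the accurate information before the rumor lands.
        f"In fact, {rumor.refutation}",
    ])

message = prebunk_message(Rumor(
    claim="voting machines were hacked to flip votes",
    tactic="dramatic anecdotes offered without verifiable evidence",
    refutation="machine counts are audited against paper records after elections.",
))
print(message)
```

The template is deliberately trivial; the point is only that a prebunking message front-loads the tactic and the correction, which is what lets readers recognize the rumor when they later encounter it.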

The researchers tested this approach through a large survey experiment involving 4,293 registered U.S. voters. Participants were shown AI-generated prebunking messages explaining why certain election fraud narratives are misleading and how such claims typically spread. The results were striking. People who received these prebunking messages were significantly less likely to believe election misinformation and reported greater confidence in the integrity of elections.
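The core comparison in such an experiment can be sketched as a simple difference in mean belief scores between treated and control participants. The numbers below are invented for illustration and do not come from the study; a real analysis would also use regression adjustment and significance tests.

```python
from statistics import mean

# Synthetic belief-in-rumor scores (0 = no belief, 1 = full belief).
control = [0.62, 0.55, 0.70, 0.58, 0.66]  # saw no prebunking message
treated = [0.41, 0.38, 0.47, 0.35, 0.44]  # saw an AI-generated prebunk

# A negative effect means the prebunking group believed the rumor less.
effect = mean(treated) - mean(control)
print(round(effect, 3))  # → -0.212
```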

Importantly, the effect persisted for at least a week after exposure and appeared across partisan lines. The intervention did not trigger the kind of ideological backlash that sometimes accompanies direct fact-checking. In other words, preparing voters for misinformation proved more effective than correcting false claims after the fact.

Even more notable was the role of AI itself. The study found that fully AI-generated prebunking messages performed about as well as those that were reviewed or edited by humans. This finding matters because one of the greatest challenges in combating misinformation is speed. False claims can spread across social media within hours, while traditional fact-checking often takes days.

AI-generated prebunking messages can be produced almost instantly and distributed widely, allowing election officials and watchdog groups to get ahead of emerging rumors.

The implications extend beyond this single experiment. Generative AI is increasingly understood as a dual-use technology in the information ecosystem. On one hand, it can enable large-scale propaganda, automated influence campaigns, and deepfake political content. On the other, it can power new tools for misinformation detection, rumor monitoring, and rapid-response public education.

For policymakers and election administrators, the lesson is clear. Protecting democratic legitimacy may require shifting from reactive to preventive strategies. Instead of waiting for misinformation to explode across social media, institutions could proactively educate voters about the narratives they are likely to encounter.

In that sense, the most promising role for AI in elections may not be as a referee correcting falsehoods after the fact. It may be as a kind of informational vaccine, strengthening the public’s resistance to manipulation before the next viral rumor begins.

How Prebunking with AI Can Inoculate Elections from Disinformation