In their latest blog post, Dr Felix Simon, Research Associate at the Oxford Internet Institute, and Dr Sacha Altay, Postdoctoral Researcher at the University of Zurich, explore the consequences of a skewed discourse around generative AI and elections.
The increasing public availability of generative artificial intelligence (GenAI) systems, such as OpenAI’s ChatGPT, Google’s Gemini, and a slew of others, has led to a resurgence of concerns in public discourse about the impact of AI and GenAI. A recurrent theme is the impact of AI on national elections. Initial predictions warned that GenAI would propel the world toward a “tech-enabled Armageddon” (Scott, 2023), where “elections get screwed up” (Verma & Zakrzewski, 2024), and that “anybody who’s not worried [was] not paying attention” (Aspen Digital, 2024).
In a new paper for the Knight First Amendment Institute, we argue that many of these alarmist predictions ignore a wealth of evidence. Mass persuasion, regardless of the tools employed, is inherently challenging. Emerging evidence demonstrates that the effectiveness of AI-driven microtargeting in political campaigns is limited. Meanwhile, a complex interplay of socioeconomic, cultural, and personal factors shapes voting behaviour and provides a bulwark against the effects of generative AI.
In this excerpt, we want to take a closer look at some of the consequences of the alarmist discourse around elections itself – because it is not just generative AI that carries risks. The way we talk about the technology isn’t risk-free either. And this matters with respect to elections.
The focus on misuses of GenAI in elections can divert us from other harms
A first point is that by overemphasizing the risks of GenAI in the context of elections, we risk overlooking the broader, more insidious ways in which GenAI is misused, such as enabling targeted harassment and amplifying harmful biases. These include the harassment of women and minorities. The creation and distribution of AI-generated fake nudes, mostly targeting women, is a form of gendered violence that seeks to silence women in public life (Murgia, 2024) and can be used to humiliate, discredit, and threaten them, which may have a chilling effect on their participation in politics. Similarly, minorities are targeted by AI-assisted harassment campaigns, including racially biased or xenophobic attacks amplified through social media. These targeted campaigns undermine efforts to build inclusive political spaces.
An overly alarmist focus on GenAI also risks obscuring equally critical, long-standing threats to electoral integrity. A wealth of research in political science shows that free and fair elections depend on a complex set of structural conditions and procedural safeguards (Alihodžić et al., 2024; Norris, 2015). Violations of these conditions likely carry greater risks than anything generative AI can currently bring about. This starts with unfit or unfair electoral systems that fail to ensure broad representation, transparency, and accountability. Poorly regulated campaign finance can skew the competition between political actors and give some of them an unfair advantage (Falguera et al., 2014). Fair elections also require that electoral management bodies remain independent, impartial, and transparent; where design flaws persist or where these bodies are deliberately undermined, hollowed out, or abolished, electoral disputes can escalate and erode trust in elections (Elklit & Reynolds, 2005). Mechanisms for adjudicating electoral disputes are likewise important, as slow or biased processes can undermine public confidence in the outcomes of elections (Kelley, 2012). At an operational level, logistical shortcomings and failures such as mishandled voter registrations, ballot (re-)counting, or complex and unfair special voting arrangements can be problematic if they unfairly put some voters at a disadvantage. Then there are attempts to limit ballot access—through restrictive voter ID laws, systematic purges of voter rolls, gerrymandering, and intimidation of voters—and the manipulation of electoral governance mechanisms, which undermines elections in ways that likely outweigh the existing as well as the as-yet-unrealized risks posed by GenAI (Bermeo, 2016; Ginsburg & Huq, 2018; Norris, 2015).
Another pressing concern for both broader democracy and the integrity of elections lies in the systematic curtailment or weakening of press freedom and freedom of expression. Legal and extralegal measures—from harsh libel laws to forms of media capture or outright violence or threats against journalists—can silence critical reporting and stifle dissenting voices, undermining one of the key accountability mechanisms in democracies. They can also lead to a poorer representation of society’s diverse voices and grievances, impeding citizens’ ability to make well-informed electoral choices.
Narratives about the outsized effect of AI on elections could lead to suboptimal policy responses
An overemphasis on AI as a threat to elections may also lead to suboptimal or even counterproductive policy responses. Excessive or overly broad regulations could not only be ineffective—because they end up targeting the wrong thing without doing much to address actual threats to elections and democracy—they could also backfire. In an attempt to curb AI-driven misinformation, manipulation campaigns, and the like, governments might, for example, implement sweeping measures that (inadvertently) limit free speech or restrict access to information. This could create a chilling effect, where legitimate political discourse and dissent are stifled, thereby undermining democratic principles (Center for News, Technology & Innovation, 2024a, 2024b). It also sits uneasily with the fact that governments themselves are using AI for propaganda campaigns (with, for example, the U.S. Department of Defense in the past exploring the creation of fake online personas; Biddle, 2024).
The alarmist discourse on the effect of AI on elections could reduce trust in and satisfaction with democracy
Finally, the narrative that generative AI poses an outsized threat to elections may, in itself, contribute to a decline in public trust and confidence in democratic institutions (Jungherr & Rauchfleisch, 2024). The perception, partially co-created by media coverage, that AI has significant effects on elections could diminish trust in democratic processes and weaken the public’s acceptance of election results. A recent study on concerns around the use of AI during the 2024 U.S. presidential election and public perceptions of AI-driven misinformation found that four out of five respondents expressed some level of worry about AI’s role in election misinformation, with higher consumption of AI-related news linked to heightened concerns (Yan et al., 2025).
When media and public discourse emphasize the risks posed by generative AI – sometimes with limited actual evidence – they may create a sense of inevitability that election outcomes are manipulated or influenced by AI, eroding trust in electoral systems and processes. Even if AI does not play a major role in a specific election, the mere perception that it could have corrupted the process could lead voters to doubt the legitimacy of the results. This perception could be particularly harmful in tightly contested elections, where public trust in the outcome is essential to maintaining political stability. It could also lead to a decrease in voter engagement and satisfaction with democracy. If voters believe AI is undermining the integrity of elections, they may become more disillusioned and disengaged from the political process in general. Why participate in an election, after all, if the outcome is rigged?
Relatedly, a risk in this context is what has been called the third-person effect and the associated decline in trust in one’s fellow citizens: AI-generated content may shape people’s perceptions and attitudes not because they themselves are influenced, but because they believe others are. This argument has been made—and empirically tested—regarding other forms of influence, such as disinformation campaigns, propaganda, and misinformation (Huang & Cruz, 2022). A study of how the general U.S. public perceives and reacts to ChatGPT found that “individuals tend to believe they would personally benefit from the positive influence of ChatGPT, while others will benefit relatively less” and that they would be “more capable of using ChatGPT critically, ethically, and efficiently than others” (Chung et al., 2025), hinting at the possibility that the third-person effect may extend to AI, too.
A final point needs to be made here. There is good to be found in being vigilant about the risks posed by GenAI, collectively adapting to the novel challenges the technology brings, holding the companies developing these technologies—often with little oversight—accountable, and minimizing the harm that can be caused by the deployment and use of GenAI systems. It is entirely possible that the costs of overreacting to the risks posed by AI are, on average, lower than the long-run costs of underreacting. But we should keep in mind the opportunity costs of focusing on AI rather than on other causes of democratic dysfunction, the policies that will follow from that focus, how these policies may be instrumentalized, and the broader effects of alarmist narratives about AI.
This excerpt was adapted from the paper “Don’t Panic (Yet): Assessing the Evidence and Discourse Around Generative AI and Elections”, published by the Knight First Amendment Institute. The full version can be found here.
Felix M. Simon is the Research Fellow in AI and News at the Reuters Institute for the Study of Journalism and a Research Associate at the Oxford Internet Institute (OII) at the University of Oxford. His research looks at the implications of a changing news and information environment for democratic discourse and the functioning of democracy. He regularly writes and comments on technology, media, and politics for various international outlets, including as a monthly columnist for Nikkei, and advises media organizations and companies on AI. Simon holds a DPhil and MSc from the Oxford Internet Institute at the University of Oxford and a BA in Film and Media Studies from Goethe University Frankfurt. He is affiliated with Columbia University’s Tow Center for Digital Journalism and the Center for Information, Technology, and Public Life (CITAP) at the University of North Carolina at Chapel Hill.
Sacha Altay is an experimental psychologist working on misinformation, trust, and social media in the Digital Democracy Lab at the University of Zurich. Through his research, he tries to understand why, despite people’s socio-cognitive abilities to resist misinformation, some false beliefs are widespread, and he tests novel communication techniques to inform people efficiently and correct common misperceptions about vaccines, GMOs, and nuclear energy. Altay completed his Ph.D. in cognitive science at the École Normale Supérieure in Paris and was previously a postdoctoral research fellow at the Reuters Institute for the Study of Journalism at the University of Oxford.
References
Scott, L. (2023, September 5). World faces “tech‐enabled Armageddon,” Maria Ressa says. Voice of America. https://www.voanews.com/a/world-faces-tech-enabled-armageddon-maria-ressa-says-/7256196.html
Verma, P., & Zakrzewski, C. (2024, April 23). AI deepfakes pose new challenges for politicians and elections. The Washington Post. https://www.washingtonpost.com/technology/2024/04/23/ai-deepfake-election-2024-us-india/
Aspen Digital. (2024, April 2). AI election threat is significant. AI Elections Initiative. https://aielections.aspendigital.org/blog/ai-election-threat/
Murgia, M. (2024). Code dependent: Living in the shadow of AI. Picador.
Alihodžić, S., Asplund, E., Bicu, I., & Wolf, P. (2024, September 10). Electoral risks: Guide on internal risk factors (3rd ed.). International Institute for Democracy and Electoral Assistance. https://doi.org/10.31752/idea.2024.40
Norris, P. (2015). Why elections fail. Cambridge University Press.
Falguera, E., Jones, S., & Ohman, M. (2014). Funding of political parties and election campaigns: A handbook on political finance. International IDEA.
Elklit, J., & Reynolds, A. (2005). Judging elections and election management quality by process. Representation, 41(3), 189–207.
Kelley, J. (2012). Monitoring democracy: When international election observation works, and why it often fails. Princeton University Press.
Bermeo, N. (2016). On democratic backsliding. Journal of Democracy, 27(1), 5–19.
Ginsburg, T., & Huq, A. (2018). How to save a constitutional democracy. The University of Chicago Press.
Center for News, Technology & Innovation. (2024a, October 11). Synthetic media & deepfakes. https://innovating.news/article/synthetic-media-deepfakes/
Center for News, Technology & Innovation. (2024b). Addressing disinformation. https://innovating.news/article/addressing-disinformation/
Biddle, S. (2024, October 17). Pentagon seeks to use deepfakes for online influence campaigns. The Intercept. https://theintercept.com/2024/10/17/pentagon-ai-deepfake-internet-users/
Jungherr, A., & Rauchfleisch, A. (2024). Negative downstream effects of alarmist disinformation discourse: Evidence from the United States. Political Behavior, 46, 2123–2143.
Yan, H. Y., Morrow, G., Yang, K.-C., & Wihbey, J. (2025, January 30). The origin of public concerns over AI supercharging misinformation in the 2024 U.S. presidential election. HKS Misinformation Review. https://misinforeview.hks.harvard.edu/article/the-origin-of-public-concerns-over-ai-supercharging-misinformation-in-the-2024-u-s-presidential-election/
Huang, H., & Cruz, N. (2022). Propaganda, presumed influence, and collective protest. Political Behavior, 44(4), 1789–1812. https://doi.org/10.1007/s11109-021-09683-0
Chung, M., Kim, N., Jones-Jang, S. M., Choi, J., & Lee, S. (2025). I see a double-edged sword: How self-other perceptual gaps predict public attitudes toward ChatGPT regulations and literacy interventions. New Media & Society. https://doi.org/10.1177/14614448241313180