A new joint study from the Oxford Internet Institute (OII), University of Oxford, and the AI Security Institute (AISI) offers unprecedented insights into how conversational AI can influence people’s political beliefs, and into what makes these systems so persuasive.
The paper, The Levers of Political Persuasion with Conversational AI, was authored by a team from OII, the UK AI Security Institute, the LSE, Stanford University and MIT, and published in Science. It examines how large language models (LLMs) influence political attitudes through conversation.
Drawing on nearly 77,000 UK participants and 91,000 AI dialogues, the study provides the most comprehensive evidence to date on the mechanisms of AI persuasion and their implications for democracy and AI governance.
“Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” said lead author Kobi Hackenburg, DPhil candidate at the OII and Research Scientist at AISI. “We show that very small, widely available models can be fine-tuned to be as persuasive as massive, proprietary AI systems.”
“This paper represents a comprehensive analysis of the various ways in which LLMs are likely to be used for political persuasion. We really need research like this to understand the real-world effects of LLMs on democratic processes,” said co-author and OII Professor Helen Margetts.
Key findings
- Model size isn’t the main driver of persuasion
A common fear is that as computing resources grow and models scale, LLMs will become increasingly adept at persuasion, concentrating influence among a few powerful actors. However, the study found that model size plays only a modest role.
- Fine-tuning and prompting matter more than scale
Targeted post-training, including supervised fine-tuning and reward modelling, can increase persuasiveness by up to 51%, while specific prompting strategies can boost persuasion by up to 27%. These techniques mean even modest, open-source models could be transformed into highly persuasive agents.
- Information density drives persuasiveness
The most persuasive AI systems were those that produced information-dense arguments: responses packed with fact-checkable claims bearing on the issue at hand. Roughly half of the explainable variation in persuasiveness across models could be traced to this factor alone.
- Persuasion comes at a cost to accuracy
The study reveals a troubling trade-off: the more persuasive a model is, the less accurate its information tends to be. This suggests that optimising AI systems for persuasion could undermine truthfulness, posing serious challenges for public trust and information integrity.
- AI conversation outperforms static messaging
Conversational AI was found to be significantly more persuasive than one-way, static messages, highlighting a potential shift in how influence may operate online in the years ahead.
The authors note that, while these systems proved persuasive in controlled settings, their real-world impact may be constrained by users’ willingness to engage in sustained, effortful conversations on political topics.
Read the full paper: The Levers of Political Persuasion with Conversational AI, published in the journal Science by the American Association for the Advancement of Science. Lead authors: Kobi Hackenburg and Ben M. Tappin. Co-authors: Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand and Christopher Summerfield.
Notes for Editors
About the research
The researchers ran three large-scale survey experiments involving 76,977 UK adults. Each participant engaged in dialogue with one of 19 open- and closed-source LLMs, including frontier systems such as GPT-4.5, GPT-4o, and Grok-3-beta, with each model instructed to persuade users on one of 707 politically balanced issues. Professional fact-checkers then evaluated nearly half a million claims across the 91,000 conversations, creating a dataset unmatched in prior research.
Funding information
This research was funded by the UK Government’s Department for Science, Innovation and Technology (DSIT).
Acknowledgements
The authors acknowledge the use of resources provided by the Isambard-AI National AI Research Resource (AIRR). Isambard-AI is operated by the University of Bristol and is funded by the UK Government’s Department for Science, Innovation and Technology (DSIT) via UK Research and Innovation; and the Science and Technology Facilities Council [ST/AIRR/I-A-I/1023].
About the Oxford Internet Institute (OII)
The Oxford Internet Institute (OII) has been at the forefront of exploring the human impact of emerging technologies for 25 years. As a multidisciplinary research and teaching department, we bring together scholars and students from diverse fields to examine the opportunities and challenges posed by transformative innovations such as artificial intelligence, large language models, machine learning, digital platforms, and autonomous agents.
About the University of Oxford
In 2025, Oxford University was placed number one in the Times Higher Education World University Rankings for the tenth year running. At the heart of this success are the twin pillars of our ground-breaking research and innovation and our distinctive educational offer. Oxford is world-famous for research and teaching excellence and home to some of the most talented people from across the globe.
Contact
For more information or to arrange interviews, please contact: Sara Spinks / Veena McCoole, Media and Communications Manager.
T: +44 (0)1865 287237
M: +44 (0)7551 345493
E: press@oii.ox.ac.uk