Fears about the impact of Generative AI on misinformation are overblown, says Oxford AI researcher.
Many observers believe the explosion of generative AI could trigger the next misinformation nightmare. Yet a new analysis released today argues that this is not necessarily the case, anticipating at best modest effects of generative AI on the misinformation landscape.
Lead author Felix M. Simon, doctoral researcher at the Oxford Internet Institute, part of the University of Oxford, argues that:
- Increasing the supply of misinformation does not necessarily mean people will consume more of it
- The impact of improved misinformation quality might be negligible, given that most individuals are minimally exposed to such content and that much existing misinformation works without needing greater realism. Increased quality may also conflict with other goals of misinformation producers, such as making content more accessible or authentic
- Information infrastructure enabling micro-targeting of users with misinformation remains largely unaffected by improvements in generative AI
- Evidence suggests that personalising misinformation through micro-targeting has limited persuasive effects on the majority of recipients, as people don’t pay attention to the messages in the first place
- The assumption that generative AI will create more personalised and persuasive content is so far unproven
In their paper, ‘Misinformation Reloaded? Fears about the Impact of Generative AI on Misinformation are Overblown’, authors Felix M. Simon (Oxford Internet Institute, University of Oxford), Dr Sacha Altay (University of Zurich) and Hugo Mercier (Institut Jean Nicod) assess current thinking about the potential impact of generative AI on the spread of misinformation online, drawing on the latest evidence from communication studies, cognitive science, and political science.
More importantly, the authors challenge the view of leading AI researchers that, because generative AI will make it easier to create realistic but false and misleading content at scale, it will lead to potentially catastrophic outcomes for people’s beliefs and behaviours, for the public arena of information, and for democracy more widely.
Explains lead author Felix Simon, doctoral researcher, Oxford Internet Institute and Research Assistant, Reuters Institute for the Study of Journalism, University of Oxford:
“Excessive and speculative warnings about the ill effects of AI on the public arena and democracy, even if well-intentioned, can also have negative unintended consequences. These include reducing trust in factually accurate news and the institutions that produce it or overshadowing other problems posed by generative AI, like nonconsensual pornography disproportionately harming women”.
In their commentary, the authors recognise that the existence and use of generative AI is not without risks, such as potentially making the work of journalists and fact-checkers more difficult, and say they are not seeking to dismiss the concerns around generative AI technology.
“Time will tell whether alarmist headlines about generative AI will be warranted or not, but regardless of the outcome, the discussion of the impact of generative AI on misinformation would benefit from being more nuanced and evidence-based, especially against the backdrop of ongoing regulatory efforts.”
With the upcoming AI Safety Summit set to be hosted by the UK Government at Bletchley Park on 1-2 November, Simon believes the current debate around how to regulate AI shouldn’t be driven by moral panics.
Concludes Simon:
“Our aim is not to settle or close the discussion around the possible effects of generative AI on our information environment or dismiss all concerns around the technology. Instead, we hope to take the discussion further by injecting some healthy scepticism into this important debate about how best to regulate AI.”
The full paper, ‘Misinformation Reloaded? Fears about the Impact of Generative AI on Misinformation are Overblown’, by Felix M. Simon (Oxford Internet Institute, University of Oxford), Dr Sacha Altay (University of Zurich) and Hugo Mercier (Institut Jean Nicod), is published in the Harvard Kennedy School Misinformation Review.
Media Contact
For a copy of the paper, interviews and briefings, please contact: Sara Spinks/Rosalind Pacey, Media and Communications Manager, Oxford Internet Institute 01865 287237 or press@oii.ox.ac.uk.
Research Funding
Felix M. Simon gratefully acknowledges support from the OII-Dieter Schwarz Scholarship at the University of Oxford. Sacha Altay received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement no. 883121). Hugo Mercier received funding from the Agence Nationale De La Recherche (ANR) (ANR-21-CE28-0016-01, as well as ANR-17-EURE-0017 to FrontCog, and ANR-10-IDEX-0001-02 to PSL).
About the OII
The Oxford Internet Institute (OII) is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet. Drawing on many different disciplines, the OII works to understand how individual and collective behaviour online shapes our social, economic and political world. Since its founding in 2001, research from the OII has had a significant impact on policy debate, formulation and implementation around the globe, as well as a secondary impact on people’s wellbeing, safety and understanding. The OII takes a combined approach to tackling society’s big questions, with the aim of positively shaping the development of the digital world for the public good. https://www.oii.ox.ac.uk/