
Automating Democracy: Generative AI, Journalism, and the Future of Democracy

By Amy Ross Arguedas and Felix M. Simon

Sophisticated AI systems are increasingly everywhere. However, 2023 will likely prove to be a particularly critical moment in the history of AI. Since the public release of ChatGPT, a chatbot powered by a so-called Large Language Model (LLM), by the US start-up OpenAI in late 2022, we have witnessed the proliferation of a form of AI labelled ‘Generative AI’ for the ability of these systems to create seemingly anything from realistic text to images. A new ‘AI race’ has followed, with technology companies vying for a slice of the pie by building and releasing their own models, businesses attempting to capitalise on AI, and regulators and civil society wondering how to put guardrails in place.

Powerful and technologically impressive as some of these developments are, they also raise important questions about their democratic impact. Up until now, we could take for granted humans’ central role in shaping democratic deliberation and culture. But what does it mean for the future of democracy if humans are increasingly side-lined by AI? Does it matter if news articles, policy briefs, lobbying pieces, and entertainment are no longer created solely by humans? How will an increasingly automated journalism and media culture affect democratic participation and deliberation? How can we protect democratic values, such as public deliberation and self-governance, in societies that stand to be reshaped by AI? And how might these new technologies be used to promote democratic values?

To investigate this situation and to gauge the views of experts and academics, the Balliol Interdisciplinary Institute project ‘Automating Democracy: Generative AI, Journalism, and the Future of Democracy’ convened a group of experts for a public symposium at Balliol College, Oxford, in collaboration with the Institute for Ethics in AI and the Oxford Internet Institute. The aim of the symposium, organised jointly by Dr Linda Eggert, an Early Career Fellow in Philosophy, and Felix M. Simon, a communication researcher and DPhil student at the Oxford Internet Institute, was to identify key issues in this space and start a conversation among academics, industry experts, and the public about the questions outlined above.

This report, co-authored by Dr Amy Ross Arguedas of the Reuters Institute for the Study of Journalism and Felix M. Simon, summarises the main themes that emerged during the symposium and outlines a list of open questions to address in future research and discussions.

Key findings:

  • The creation, implementation, and regulation of LLMs raise important questions about power and inequalities.
  • The application of LLMs developed largely in the Global North to other contexts poses important technical, ethical, and legal challenges.
  • Personalisation through LLMs can be a double-edged sword.
  • While automation has been used in news recommendation and distribution for some time, media organisations are increasingly experimenting with it in news production. Generative AI creates both opportunities and risks for news organisations.
  • The regulation of generative AI needs to be done in a manner that is both democratic and global in scale.

Citation:

Arguedas, A. R., & Simon, F. M. (2023). Automating Democracy: Generative AI, Journalism, and the Future of Democracy (p. 21). Balliol Interdisciplinary Institute, University of Oxford.
