
Uncovering Similarities in News Organisations’ AI Guidelines

Published on 8 Sep 2023
Written by Felix M. Simon
A new working paper finds that leading news organisations regulate the use of AI in journalism in similar ways. But a closer analysis of AI guidelines also reveals blind spots.

In the ever-evolving landscape of news reporting, the integration of Artificial Intelligence (AI) has recently taken centre stage. A significant catalyst for this transformation was the public debut of ChatGPT, a Large Language Model (LLM) by US-based OpenAI, in November 2022. This development has prompted many news organisations, caught between anticipation and apprehension, to turn their attention to AI. While the full impact of AI on journalism is still unfolding, many publishers are actively exploring the potential of the technology.

Yet, many of these uses carry risks. AI-powered recommendation engines can discriminate against certain groups of users. Texts produced by LLMs are prone to factual errors and distortions, while AI-generated images may be mistaken for real ones by audiences. The list of problems is long. If newsrooms publish AI output without taking precautions, they may be putting their journalistic credibility at risk.

In response, news organisations have started to draw up AI guidelines as one way of countering some of these issues and ensuring the ethical use of AI. Yet, despite some pioneering work studying the content of such guidelines, questions remain. Amid calls to regulate AI more tightly, including in the news, how advanced are current efforts? Where do efforts converge or diverge, and what are the blind spots?

We set out to study these questions, looking at AI guidelines from 52 publishers in 12 countries around the world. In this post, we share our perspective on the key findings and implications of our new pre-print (which has not yet been peer-reviewed).

Uncovering Similarities

From the outset, we aimed to answer one key question: Do AI guidelines in news organisations resemble each other? Given that AI acts with broadly similar effects (and creates similar issues) across contexts, some similarity should be expected. The technology has also created a lot of uncertainty for organisations, and in the face of such uncertainty and ambiguity, organisations often tend to emulate those that have found success or have acted before them.

Our analysis indicates that the answer to this question is ‘yes.’ Despite the diversity of countries and contexts, a surprising degree of uniformity exists between these guidelines – not so much in the way they are structured and formulated but in how news organisations have decided to regulate the technology and ensure that it is used ethically.

Some differences, of course, remain. Publicly funded and commercial publishers seem to take somewhat different approaches, but it is difficult to draw reliable conclusions about this.

Shedding Light on Blind Spots

Our research also brought to light several critical blind spots in the news industry’s AI guidelines, spanning a number of key areas:

  • Enforcement and Oversight: Many guidelines lacked teeth when it came to enforcement or oversight of compliance. This raises the question of how effective many of them will actually be.
  • Technological Dependency: Surprisingly, discussions on the potential impact of technological dependency on external providers of AI were absent, despite the potential risks such dependencies can pose for publishers.
  • Audience Engagement: Despite industry discussions about the need to engage with audiences, few guidelines mentioned soliciting audience feedback on the use of AI in journalism.
  • Sustainability and AI Supply Chains: Debates about sustainable AI and AI supply chains, and the environmental and societal implications of the technology, were largely missing.
  • Workplace Surveillance and Human Rights: Critical issues like workplace surveillance, data colonialism, labour exploitation, and potential human rights abuses tied to AI training, development and use were not addressed.

Implications and Open Questions

Our study raises critical questions for the future: Little is known about how power dynamics among editorial, business, and tech teams shape AI guidelines within news organisations. We also lack knowledge of why some organisations embrace AI guidelines while others remain hesitant. And only the future will tell whether AI guidelines evolve towards more standardisation or customisation – and whether they will prove to be effective.

Nevertheless, AI guidelines can play a pivotal role in responsible and ethical AI integration in journalism. While they are not a panacea for all AI-related challenges, they can potentially provide a robust framework for the ethical use of AI in many news organisations.

The good news from our study: industry self-regulation on AI seems to be well underway. Leading publishers across countries are setting up strategies to address crucial aspects of AI, and they broadly do so along similar lines. The practices of the early proponents of AI guidelines in news can become a model for others, paving the way for better AI practices across the news industry.

We believe that collaborative efforts, such as the ‘AI Charter in Media’ by Reporters Without Borders, can build upon our findings and address the identified shortcomings in AI guidelines. We also hope this research will serve as a useful resource for news organisations seeking to navigate the complex terrain of AI integration responsibly and ethically.

Further information

Read the full pre-print, ‘Policies in Parallel? A Comparative Study of Journalistic AI Policies in 52 Global News Organisations’, by Kim Björn Becker (F.A.Z. and University of Trier) and Felix M. Simon and Chris Crum (doctoral candidates, Oxford Internet Institute).

This paper was supported by the OII-DSF Programme on AI and Work and is part of the ‘AI, News, and the Transformation of the Public Arena’ project.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of these bodies.

Title image: DALL-E (‘An oil painting by Henry Matisse of a desk in a newspaper’s office. The desk is full with documents titled “AI Guidelines”, with a computer on the right side of the desk. [detailed, oil painting, colourful, on canvas]’)
