
AI will not revolutionise journalism, but it is far from a fad


Published on
6 Mar 2023
Written by
Felix M. Simon
The technology is a chance for the news if used wisely, argues Felix Simon, doctoral researcher, Oxford Internet Institute, University of Oxford.


If it were up to ChatGPT, the introduction to this piece would read as follows: ‘Many people seem to believe that artificial intelligence, specifically large language models like ChatGPT, will completely revolutionise the field of journalism. In reality, the impact of AI on journalism will be much more gradual and nuanced. It will vary depending on the specific application and the context in which it is used.’

Arguably, this is not quite as punchy as it could be, but admittedly it's good enough. All it took was a prompt with some specifications in the form of bullet points, and voilà, at the click of a button an introduction was born.
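The little exercise is also easy to reproduce. Here is a minimal sketch, assuming a recent version of OpenAI's Python client and an API key in the environment; the model name and prompt wording are illustrative, not the ones used for this piece:

```python
# A minimal sketch of generating an article introduction from bullet points.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY are set up;
# the model name and prompt below are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = """Write a short introduction for an opinion piece on AI and journalism.
Specifications:
- Argue that AI will not completely revolutionise journalism
- Note that its impact will be gradual and context-dependent
- Keep it under 80 words"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```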

In the future, the little exercise I have performed here will be something journalists will probably do as routinely as making themselves a cup of coffee. They will generate summaries of articles already written, polish their writing style or create illustrations that can be used on the fly. In some cases, they already do (with mixed results, as the CNET example shows).

The view, however, that AI will completely revolutionise journalism and ‘disrupt the news’ is misguided. Having researched the adoption of artificial intelligence in international news organisations — including places like the BBC, the Washington Post or Germany’s public service broadcasters — for the better part of the last three years, I find it both amusing and concerning to see the current discussion around generative AI unfold. Amusing, because we have been here before and many of these debates follow the typical patterns that can often be observed around new technologies (the psychologist Amy Orben has described these dynamics very well here). Concerning, because we have seemingly learned little from the past.

Technological change does not work like a ‘deus ex machina’ and more crucially, it is never universal. The old survives alongside the new, sometimes outlasting it because it proves more useful. Where AI will replace existing journalistic work routines and technologies wholesale, where it will just be layered on top of them, and where it will not be used (or used and then discarded) in the news is, as of yet, hard to say. While I am more open now to the idea that AI could be a significant ‘difference that makes a difference’ in journalism than I was a few years ago, arguments that it will radically change journalism and news as we know it should be treated with a healthy dose of scepticism.

Ironically, the current hype around generative AI is, while not necessarily a sideshow, at least a little misleading. AI is not completely new territory for the news. For several years, the technology has slowly moved into news production and distribution, in most cases without readers (or journalists) really noticing. Nowadays, hardly any journalist still transcribes audio manually. The analysis of large data leaks, the fine-tuning of dynamic paywalls, or article recommendations — all this is increasingly underpinned by varieties of machine learning (one form of AI). If you are curious to see just how vast the existing and potential use cases are, Anna Hansen and colleagues provide a very good overview here. Large language models (LLMs) such as OpenAI’s ChatGPT and its artificial colleagues are the latest, ‘sexier’ iteration in a process that has been going on for some time. The bigger impact, however, will for now be behind the scenes.
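To make the ‘behind the scenes’ point concrete, here is a deliberately small sketch of content-based article recommendation, one of the machine-learning uses mentioned above. It assumes scikit-learn is installed; the headlines are invented, and real newsroom systems draw on far richer signals such as click behaviour, recency, and learned embeddings:

```python
# A toy content-based article recommender using TF-IDF and cosine similarity.
# Real newsroom recommenders use much richer signals; this only illustrates
# the unglamorous machine learning that often sits under a 'related articles' widget.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Council approves new cycle lanes in the city centre",
    "Local election results: turnout hits a ten-year high",
    "Roadworks on the high street to continue until spring",
    "Interview: the mayor on transport policy and budgets",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(articles)
similarity = cosine_similarity(vectors)

# Recommend the article most similar to the first one (excluding itself).
scores = similarity[0].copy()
scores[0] = -1.0
print("Reader of article 0 might also like:", articles[scores.argmax()])
```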

AI has arrived in journalism, and it is here to stay. The follow-up question ‘But do we want it to?’ is by now largely beside the point. The industry answer, as a recent report which polled news executives suggests, is clearly yes. 28 percent of respondents said their organisation was using AI regularly, while 39 percent said they were experimenting with it. Recent examples of publishers scrambling for an AI strategy, or announcing, as BuzzFeed has, that they will integrate the technology more fully into their products, emphasise the current industry dynamic: publishers have voted with their feet, and the irresistibility of the technology and the current hype will do the rest. The wider adoption of the technology is all but certain.


The Challenges of Bringing AI into News Work

And yet, challenges abound. In this context, LLMs are an instructive example. While ChatGPT’s results are impressive — spitting out everything from article summaries to a manual for removing a peanut butter sandwich from a VCR in the style of the King James Bible — its flaws are, unfortunately, just as egregious and well documented.

Large language models have no true understanding of the world. In the words of several leading AI researchers, including Timnit Gebru, they are ‘stochastic parrots’ (although this analogy is itself somewhat problematic, as the philosopher Luciano Floridi, for example, points out: ‘parrots have an intelligence of their own that would be the envy of any AI but, above all, because LLMs synthesise texts in new ways, restructuring the contents on which they have been trained, not providing simple repetitions or juxtapositions.’). Yet the overall point stands: LLMs do not ‘think’ as a human would but instead mimic our ability to communicate, producing output based simply on the prediction of ‘tokens’ — educated guesses about what will likely come next in a sentence.
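The token-prediction mechanic can be caricatured in a few lines of Python. The probabilities below are invented for illustration; real models compute such distributions over vocabularies of tens of thousands of tokens, conditioned on the full context:

```python
# A toy illustration of next-token prediction: pick the most probable
# continuation from a (hand-invented) probability distribution.
# Real LLMs recompute such a distribution at every step of generation.
next_token_probs = {
    "revolutionise": 0.08,
    "change": 0.31,
    "transform": 0.22,
    "destroy": 0.05,
    "reshape": 0.34,
}

context = "AI will gradually"
prediction = max(next_token_probs, key=next_token_probs.get)
print(f"{context} {prediction}")  # -> "AI will gradually reshape"
```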

This alone makes them difficult companions for journalists. Accuracy and contextual understanding are at the core of good journalism, and getting either from an LLM is still difficult. Citing their sources isn’t their strong suit, either. While various experts suggest that some of these flaws will be overcome with time, for now they stand in the way of a more widespread adoption in the news. Add to this concerns about their training, climate impact, plagiarism, liability issues, a greater dependency on platform companies, and how they might affect audiences’ trust, and you have a good overview of why many news organisations will be (and are) careful before they fully integrate these and other AI systems into their work.

What is also easily forgotten amid the current AI frenzy is that news organisations will still pursue the same goals and have the same needs as they did before. What will change through AI is merely the way these are pursued. The task stays the same, but the arsenal of tools changes. This re-tooling of journalism will be nuanced and gradual, not least because there will be resistance and hesitancy from within (and for good reason).

Likewise, the impact of the technology will vary. The much-touted efficiency gains (which are difficult to quantify, depending on the context) will be easier to achieve where 100% accuracy is not the most important goal — e.g., in targeting readers with more content. They will be much harder to realise in areas where accuracy is required, for example in news writing. And not every news organisation will benefit from AI in the same way. Building customised AI systems is easier for large, well-resourced publishers who have the money and expertise to experiment with the technology and make it work for them. Many local newsrooms or smaller publishers will not be in such an enviable position.

Headlines shout with glee,
AI innovation blooms,
Tomorrow arrives.

(A ChatGPT Haiku on AI in the news)

The Need for the Responsible Adoption of AI

Yet, no matter how the adoption of AI in the news plays out in detail, news organisations have a responsibility to use the technology ethically, if they use it at all. The potential for harm is real, both for publishers and the public.

Luckily for the public, a number of news organisations are thinking hard about how AI can serve journalists and audiences without causing harm — such as the unintentional discrimination that results from replicating biases in datasets. One of these organisations is the BBC, whose Machine Learning Engine Principles are a source of inspiration for both public service and commercial news organisations internationally. Human oversight of AI is a key component of these guidelines, as is the regular review of any applications.

While all this might strike one as a largely theoretical debate, it has real-world implications. News organisations are still important gatekeepers to the public sphere. At their best, they provide us — the public — with ‘accurate, accessible, diverse, relevant, and timely independently produced information about public affairs’, as the journalism scholar Rasmus Kleis Nielsen argues; information which helps us make important decisions in our lives, from how to vote to when we should complain to the council about roadworks that are dragging on.

At their worst, however, they do the exact opposite. AI will play an important part in this. Used the right way, it can help journalists and news organisations do more of what they do well. But used without care, or with bad intentions, it can just as easily aid discrimination, amplify one-sided views, or produce cheap infotainment that is not just annoying but outright misleading.

The history of technologies and their adoption teaches us that it is easier to shape them, and the way they are used, at the beginning. Once they gain their own momentum, this becomes increasingly difficult. For now, there is still time. It is up to news organisations to make the right choices in how they use this technology and prevent a race to the bottom — and up to us as a society to demand that the right choices are made.

If you work in a news organisation and would like to contribute to research on the topic, please feel free to reach out to Felix at felix.simon@oii.ox.ac.uk

About the Author:

Felix Simon

Felix M. Simon is a journalist, communication researcher, and doctoral student at the Oxford Internet Institute (OII), a Knight News Innovation Fellow at Columbia University’s Tow Center for Digital Journalism, and an affiliate at the Center for Information, Technology, and Public Life (CITAP) at the University of North Carolina at Chapel Hill. He also works as a research assistant at the Reuters Institute for the Study of Journalism (RISJ).
