
The Elephant in the Room: How AI could give technology giants more control over the news


Published on 23 May 2022
Written by Felix M. Simon


Felix M. Simon, doctoral candidate, Oxford Internet Institute, explains more.

AI and its use in the news media

If you have attended any news industry conference in recent years, one thing will have stood out: almost everyone seems to be excited by the possibilities of using artificial intelligence for journalism. And indeed, AI looms large in modern-day newswork. Investigations like the ‘Pandora Papers’ on the offshore secrets of wealthy elites would not have been possible without the help of machine learning.

Elsewhere, AI helps content moderators do their job more efficiently, making the comment sections of news websites slightly less gruelling spaces to be in. Journalists the world over nowadays rely on AI to quickly translate or transcribe text and audio as part of their everyday reporting. Meanwhile, on the business side of news organisations, AI informs the work of those trying to better understand news audiences, while others use it to fine-tune the mechanics of dynamic paywalls. In short: AI is already all over the news and its use is only likely to increase.

Yet, there is a catch—one that has gone largely unacknowledged by the media and its consumers. All this work with AI in the news comes at a cost. Most of the AI tools, services, or the infrastructure required to develop and run them are not owned by news organisations. Instead, they are often concentrated in the hands of a few powerful technology companies such as Google, Microsoft or Amazon Web Services (AWS).

This, I argue, is the proverbial elephant in the room when it comes to AI in the news. As I explain in my new paper ‘Uneasy Bedfellows: AI in the News, Platform Companies and the Issue of Journalistic Autonomy’, published in the journal Digital Journalism, the roll-out of AI in the news risks shifting even more control to major platform companies, making the news industry even more dependent on them than it already is.

Controlling the means of connection and production

To date, platform companies’ power over news organisations has mainly stemmed from their control over the online advertising market and the channels of distribution. They are important gateways to audiences: search engines and social media enable consumers to access the news, increasing news media’s reach and traffic while shaping the flow of attention online. Platform companies also exert soft power by funding journalism projects and research*, and they are often involved in lobbying efforts.

AI, however, adds a new lever of control. Suddenly, platform companies no longer have only ‘relational power’ in the sense that they mainly ‘control the means of connection’ (especially to audiences), as scholars Sarah Anne Ganter and Rasmus Nielsen argue in their new book ‘The Power of Platforms’.

In the case of AI, platform companies increasingly provide infrastructure, services, and tools that matter for all sides of the news organisation, allowing them to control both the means of production and the means of connection. Importantly, they do so through a technology for which various structural factors have created high barriers to entry and afforded them market dominance, making it difficult for news organisations to avoid them or compete with them in this space, let alone operate artificial intelligence applications without relying on their hardware, software, data, and expertise.

An AI-generated triptych of images based on the text prompt ‘An elephant representing major technology companies standing in a room, painted in the style of Salvador Dalí’ (made with Wombo.art) and ‘Explainable AI’ by Alexa Steinbrück (Better Images of AI/CC-BY 4.0)

What does this mean for the news and why is it problematic?

This raises the question of why it is problematic. As I argue in the paper, several risks flow from this increasing dependence of news organisations on platform companies, which I set out below.

Firstly, there’s a risk of vendor lock-in where news organisations become so structurally dependent on AI provided by platform companies that they are unable to switch to another vendor without incurring substantial costs.

Secondly, platforms possess artefactual and contractual control over their AI infrastructure and services. They set the boundaries for what is, and more importantly what is not, possible. They can also change this infrastructure at will, without news organisations necessarily having the ability to intervene or resist. This could expose news organisations to the risk that their AI solutions, once implemented at various stages of news production and distribution, are subject to unforeseen changes or, in a worst-case scenario, stop functioning entirely.

Thirdly, AI provided by platform companies could make it even more difficult for news organisations to understand why these systems make certain decisions or predictions, forcing them to rely on the platform companies themselves when it comes to questions of accuracy or fairness in any results.

Fourthly, by controlling a technology that is increasingly embedded at various points in news organisations, platform companies could gain a deeper understanding of how these organisations work or gain access to sensitive data, thus limiting news organisations’ ability to protect sources or proprietary business information.

Further implications for news organisations

How platform companies might make use of this control—if at all—is currently unclear. That it is not beyond them to manipulate their infrastructure to their own advantage vis-à-vis publishers, however, is well documented. Taken together, these incremental shifts in control have cumulative implications for the autonomy of the news we all consume—the normative ideal that the news should be free from any undue influence so that it can fulfil a watchdog function. By handing over central functions of news work to AI controlled by large platform companies, news organisations risk losing control over some of the day-to-day operational decision-making and long-term strategic thinking involved in running them.

What is more, the increasing use of AI in the news also risks skewing news organisations more strongly toward the values and logics of platform companies encoded into these systems, such as greater quantification or commercialisation, which may conflict with the values that have traditionally motivated news organisations. In the past, following platforms’ push to adopt social media for distribution and audience engagement has strained some news organisations’ relationships with their core audiences or has tied up resources that could have been invested in, for example, deeper coverage. Using AI from platform companies in the news may end up calcifying such tendencies—a development that could ultimately also restructure the public arena, the common but contested mediated space vital for democracy to which news organisations act as important gatekeepers.

Where do we go from here?

Platform companies already hold great sway over the news, shaping what we, as news consumers, get to see on their platforms. With AI, they can theoretically expand this control even further. Yet, many open questions remain, and only careful empirical examination will ultimately allow us to see the scale of this problem and what news organisations need to do to address it.

As AI becomes increasingly enmeshed in news organisations, our focus should shift from merely asking about direct effects of the technology (‘More efficiency, more subscriptions, journalists might feel alienated…’) to the larger, structural implications.

There is still much more work to be done and this paper is just the start, raising more questions than it provides answers. My hope, however, is that it will act as a roadmap for others in this space and add to the broader debate as we collectively seek to make sense of the role of AI in the news. There’s much to be explored. Let’s get to work!

Further information

Read the full paper ‘Uneasy Bedfellows: AI in the News, Platform Companies and the Issue of Journalistic Autonomy’, by Felix M. Simon, doctoral candidate, Oxford Internet Institute, published in Digital Journalism.

*The author has in the past been employed on projects funded by Google and Facebook in his role as research assistant at the Reuters Institute for the Study of Journalism.

Felix M. Simon is a journalist, communication researcher, and doctoral student at the Oxford Internet Institute (OII) and a Knight News Innovation Fellow at Columbia University’s Tow Center for Digital Journalism. He also works as a research assistant at the Reuters Institute for the Study of Journalism (RISJ) and regularly writes and comments on technology, media, and politics for various international outlets.

This work was supported by the Leverhulme Trust, a Knight News Innovation Fellowship at Columbia University’s Tow Center for Digital Journalism, and the Minderoo-Oxford Challenge Fund in AI Governance. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of these bodies.

Banner image: An AI-generated image based on the text prompt ‘Artificial intelligence, AI, newsroom, Google, Facebook, technology companies, elephant, room.’ Made with Wombo.art.
