Journalism faces a decline of traditional business models. News leaders are increasingly pressured to reorient toward data-driven logics. Many news organisations now bet big on AI investments, hoping that the technology can generate additional revenue or free up staff time. But problems emerge: some journalists fear being replaced by AI; there are possible frictions between journalistic values and the values encoded into AI systems and infrastructures; and little is known about the impact of AI on the news and the health of our public discourse. AI also risks making news organisations even more reliant on the technology and platform companies that dominate AI development, potentially aggravating the economic problems that news organisations face.
Starting in March 2021, the project ‘AI in the News: Reshaping our Information Ecosystem’ at the Oxford Internet Institute investigated these and related questions, generously funded by Oxford University’s Research Centre in the Humanities (TORCH) and the Minderoo-Oxford Challenge Fund in AI Governance and with administrative support from the Oxford Internet Institute and Balliol College.
The aim of the project, led by Felix M. Simon, was to identify key issues in this space, collect evidence, and start a conversation among academic and industry leaders about those issues. These efforts culminated in a public symposium held at Balliol College on 25th May 2023, which sought to foster discussions between industry experts, academics, and students on the key issues identified during the active research phase.
Held under the Chatham House Rule, the symposium brought together leading experts on AI and the news, including:
- Shreya Vaidyanathan, Product Manager at Bloomberg LP
- Jane Barrett, Global Editor for Media News Strategy at Reuters
- Siddharth Venkataramakrishnan, Banking and Fintech Correspondent at the Financial Times
- Melissa Heikkilä, Senior Reporter for AI at MIT Technology Review
- Nic Newman, Senior Research Associate at the Reuters Institute for the Study of Journalism
- David Caswell, Executive Product Manager at the BBC
- Tom Standage, Deputy Editor at The Economist
This report, co-authored by Felix M. Simon and Luisa Fernanda Isaza-Ibarra, provides a summary of the main themes that emerged during the symposium and outlines a few recommendations as well as blind spots to be addressed in future research.
Key findings are:
- AI systems already play a concealed but significant role in shaping the information landscape. They filter, curate, rank, and moderate content on platforms and search engines. Increasingly, news organisations, too, use AI for journalistic tasks and in distribution processes.
- The implementation of these AI systems often requires significant time and resource investments, and their development may initially be inefficient. While they promise greater efficiency and productivity for news work, this is not a foregone conclusion, as AI-generated work may require, for example, additional editing or careful supervision. The impact of AI on efficiency and productivity varies depending on the task and context.
- Other challenges in AI integration in news organisations include the risk of unwanted biases, privacy and intellectual property concerns, and the disruption of newsroom dynamics. Many news organisations prioritise reliability and trustworthiness, setting limits on AI use.
- AI adoption by news organisations may further increase the influence of large technology companies. Large platform companies are akin to 'landlords' who shape and control large parts of the AI ecosystem, while smaller start-ups resemble 'tenants' who rely on these larger firms while also pursuing their own interests.
- The future of the information environment is uncertain, as the integration of AI is still in its early stages and depends on decisions made by technology companies, news organisations, regulatory bodies, and public acceptance and usage of AI systems. News organisations have an opportunity to reinforce their position as trusted brands in the face of widespread AI use by emphasising the human element in journalism and responsibly using AI.
Key recommendations for news organisations, regulators, and other AI stakeholders are:
- Emphasise responsible use of AI by implementing ethical guidelines, human oversight, transparency policies, and internal auditing processes while considering the need for regulatory efforts.
- Scrutinise old and new technology companies to understand their motives and actions, as their agendas may not align with those of news publishers or those of the public.
- Increase investment in R&D initiatives and comprehensive training programmes to adapt to an evolving media and technology landscape and maintain a competitive edge.
- Strengthen industry collaboration by identifying common areas of concern and involving smaller and international publishers to address winner-takes-most dynamics within the industry. Foster collaboration between news organisations and researchers to leverage the potential benefits of AI for journalism and overcome barriers to collaboration.
- Consider local and regional perspectives to help create a balanced and inclusive media ecosystem, preserving diverse perspectives and local voices. This extends to the inclusion of non-Western newsrooms in discussions, collaborations, and research efforts to address their unique challenges and ensure a global perspective.
- Actively involve audiences and the public in developing and using AI, as their exclusion may lead to fragmented news consumption experiences and diminished trust and engagement.
Please cite as: Simon, F. M., & Isaza-Ibarra, L. F. (2023). AI in the news: Reshaping the information ecosystem? (p. 24). Oxford Internet Institute, University of Oxford. http://dx.doi.org/10.5287/ora-dx865edma