PRESS RELEASE -

Large Language Models pose risk to science with false answers, says Oxford study


Published on 20 Nov 2023
Written by Brent Mittelstadt, Sandra Wachter and Chris Russell

Large Language Models (LLMs) pose a direct threat to science because of so-called ‘hallucinations’ (untruthful responses) and should be restricted to protect scientific truth, says a new paper from leading Artificial Intelligence researchers at the Oxford Internet Institute.

The paper by Professors Brent Mittelstadt, Chris Russell and Sandra Wachter has been published in Nature Human Behaviour. It explains, ‘LLMs are designed to produce helpful and convincing responses without any overriding guarantees regarding their accuracy or alignment with fact.’

The reason for this is that the data the technology uses to answer questions does not always come from a factually correct source. LLMs are trained on large datasets of text, usually taken from online sources. These can contain false statements, opinions and creative writing, amongst other types of non-factual information.

Prof Mittelstadt explains, ‘People using LLMs often trust them like a human information source. This is, in part, due to the design of LLMs as helpful, human-sounding agents that converse with users and answer seemingly any question with confident-sounding, well-written text. The result of this is that users can easily be convinced that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.’

To protect science and education from the spread of bad and biased information, the authors argue, clear expectations should be set around what LLMs can responsibly and helpfully contribute. According to the paper, ‘For tasks where the truth matters, we encourage users to write translation prompts that include vetted, factual information.’

Prof Wachter says, ‘The way in which LLMs are used matters. In the scientific community it is vital that we have confidence in factual information, so it is important to use LLMs responsibly. If LLMs are used to generate and disseminate scientific articles, serious harms could result.’

Prof Russell adds, ‘It’s important to take a step back from the opportunities LLMs offer and consider whether we want to give those opportunities to a technology, just because we can.’

LLMs are currently treated as knowledge bases and used to generate information in response to questions. This makes the user vulnerable both to regurgitated false information that was present in the training data, and to ‘hallucinations’ – false information spontaneously generated by the LLM that was not present in the training data.

To overcome this, the authors argue that LLMs should instead be used as ‘zero-shot translators’. Rather than relying on the LLM as a source of relevant information, the user should simply provide the LLM with appropriate information and ask it to transform that information into a desired output: for example, rewriting bullet points as a conclusion, or generating code to transform scientific data into a graph.

Using LLMs in this way makes it easier to check that the output is factually correct and consistent with the provided input. 
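To make this concrete, the sketch below shows what such a ‘translation’ prompt might look like in practice. The example findings, the build_translation_prompt helper and the generate function are illustrative placeholders rather than anything prescribed by the paper; the essential point is that all factual content is supplied by the user, and the LLM is asked only to change its form.

```python
# A minimal sketch of a "zero-shot translation" prompt: the user supplies
# vetted, factual content and asks the LLM only to change its form,
# not to add any facts of its own. The findings below are invented examples.

VETTED_FINDINGS = [
    "Participants in the treatment group completed tasks 12% faster on average.",
    "No statistically significant difference in error rates was observed.",
    "Effects were consistent across both age cohorts.",
]

def build_translation_prompt(findings: list[str]) -> str:
    """Wrap vetted bullet points in instructions that restrict the model
    to rephrasing the provided information into a conclusion paragraph."""
    bullets = "\n".join(f"- {point}" for point in findings)
    return (
        "Rewrite the following vetted findings as a single conclusion "
        "paragraph. Use only the information given below; do not add "
        "facts, citations, or interpretations that are not present.\n\n"
        f"{bullets}"
    )

prompt = build_translation_prompt(VETTED_FINDINGS)

# `generate` is a hypothetical stand-in for whichever LLM interface is used.
# draft_conclusion = generate(prompt)
print(prompt)
```

The resulting draft can then be compared line by line against the vetted input, since nothing in it should go beyond the supplied bullet points.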

The authors acknowledge that the technology will undoubtedly assist with scientific workflows but are clear that scrutiny of its outputs is key to protecting robust science.

Download the full paper, ‘To protect science we must use LLMs as zero-shot translators’. Lead author: Dr Brent Mittelstadt, Director of Research, Associate Professor and Senior Research Fellow, Oxford Internet Institute. Co-authors: Professor Sandra Wachter, Oxford Internet Institute, and Chris Russell, Dieter Schwarz Associate Professor, AI, Government & Policy, Oxford Internet Institute.

Notes for Editors: 

The research publication was supported through research funding provided by the Wellcome Trust (grant no. 223765/Z/21/Z), the Sloan Foundation (grant no. G-2021-16779), the Department of Health and Social Care, and Luminate Group. This funding supports the Trustworthiness Auditing for AI project and the Governance of Emerging Technologies research programme at the Oxford Internet Institute, University of Oxford.

About the OII

The Oxford Internet Institute (OII) is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet. Drawing on many different disciplines, the OII works to understand how individual and collective behaviour online shapes our social, economic and political world. Since its founding in 2001, research from the OII has had a significant impact on policy debate, formulation and implementation around the globe, as well as a secondary impact on people’s wellbeing, safety and understanding. The OII takes a combined approach to tackling society’s big questions, with the aim of positively shaping the development of the digital world for the public good. https://www.oii.ox.ac.uk/

About the University of Oxford

Oxford University has been placed number one in the Times Higher Education World University Rankings for the seventh year running, and number two in the QS World Rankings 2022. At the heart of this success are the twin pillars of our ground-breaking research and innovation and our distinctive educational offer. Oxford is world-famous for research and teaching excellence and home to some of the most talented people from across the globe. Our work improves the lives of millions, solving real-world problems through a huge network of partnerships and collaborations. The breadth and interdisciplinary nature of our research, alongside our personalised approach to teaching, sparks imaginative and inventive insights and solutions.

Media contact:

Sara Spinks/Roz Pacey, Media and Communications Manager

T: 01865 280528 E: press@oii.ox.ac.uk
