PRESS RELEASE

Large Language Models pose a risk to society and need tighter regulation, say Oxford researchers


Published on 7 Aug 2024
Written by Sandra Wachter, Brent Mittelstadt and Chris Russell
Leading experts in regulation and ethics at the Oxford Internet Institute, part of the University of Oxford, have identified a new type of harm created by LLMs which they believe poses long-term risks to democratic societies and needs to be addressed by creating a new legal duty for LLM providers.

In their new paper, ‘Do large language models have a legal duty to tell the truth?’, published in Royal Society Open Science, the Oxford researchers set out how LLMs produce responses that are plausible, helpful and confident but contain factual inaccuracies, misleading references and biased information. They term this phenomenon ‘careless speech’, which they believe causes long-term harm to science, education and society.

Explains lead author Professor Sandra Wachter, Professor of Technology and Regulation, Oxford Internet Institute:

“LLMs pose a unique risk to science, education, democracy, and society that current legal frameworks did not anticipate. This is what we call ‘careless speech’ or speech that lacks appropriate care for truth. Spreading careless speech causes subtle, immaterial harms that are difficult to measure over time. It leads to the erosion of truth, knowledge and shared history and can have serious consequences for evidence-based policy-making in areas where details and truth matter, such as health care, finance, climate change, media, the legal profession, and education.

“In our new paper, we aim to address this gap by analysing the feasibility of creating a new legal duty requiring LLM providers to create AI models that, put simply, will ‘tell the truth’.”

This phenomenon of ‘careless speech’ is further complicated by human feedback that often favours outputs aligning with evaluators’ personal biases, and by annotations that train models to generate ‘assertive-sounding outputs’, among other factors unrelated to advancing truthful outputs.

Adds Associate Professor and Research Associate Dr Chris Russell, Oxford Internet Institute, “While LLMs are built so that using them feels like a conversation with an honest and accurate assistant, the similarity is only skin deep, and these models are not designed to give truthful or reliable answers. The apparent truthfulness of outputs is a ‘happy statistical accident’ that cannot be relied on.”

To better understand the existing legal constraints on LLMs, the researchers carried out a comprehensive analysis of truth-telling obligations in current legal frameworks such as the Artificial Intelligence Act, the Digital Services Act, the Product Liability Directive and the Artificial Intelligence Liability Directive. They find that current legal obligations tend to be limited to specific sectors, professions or state institutions and rarely apply to the private sector.

Commenting on the findings, Director of Research, Associate Professor Brent Mittelstadt said,

“Existing regulations provide weak regulatory mechanisms to mitigate careless speech and will only be applicable to LLM providers in a very limited range of cases. Nevertheless, in their attempts to eliminate ‘hallucinations’ in LLMs, companies are placing significant guardrails and limitations on these models. This creates a substantial risk of further centralizing power in a few large tech companies to decide which topics are appropriate to discuss or off limits, which information sources are reliable, and ultimately what is true”.

The Oxford academics argue that LLM providers should better align their models with truth through open, democratic processes. They propose the creation of a legal duty for LLM providers to create models that prioritize the truthfulness of outputs above other factors like persuasiveness, helpfulness or profitability. Among other things, this would mean being open about the training data they use and the limitations of their models, explaining how they fine-tune models through practices such as reinforcement learning from human feedback or prompt constraints, and building fact-checking and confidence-scoring functions into outputs.

Concludes Professor Wachter, “Current governance incentives focus on reducing the liability of developers and operators and on maximising profit, rather than making the technology more truthful. Our proposed approach aims to minimise the risk of careless speech and long-term adverse societal impact whilst redirecting development towards public governance of truth in LLMs.”

Download the full paper, ‘Do large language models have a legal duty to tell the truth?’, by Sandra Wachter, Brent Mittelstadt and Chris Russell, published in Royal Society Open Science.

Notes for editors:

Media information

For more information or to request an interview with the authors, call +44 (0)1865 287 210 or contact press@oii.ox.ac.uk.

About the article

The full article, ‘Do large language models have a legal duty to tell the truth?’, by Sandra Wachter, Brent Mittelstadt and Chris Russell, is published in Royal Society Open Science.

About the OII

The Oxford Internet Institute (OII) is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet. Drawing from many different disciplines, the OII works to understand how individual and collective behaviour online shapes our social, economic and political world. Since its founding in 2001, research from the OII has had a significant impact on policy debate, formulation and implementation around the globe, as well as a secondary impact on people’s wellbeing, safety and understanding. Drawing on many different disciplines, the OII takes a combined approach to tackling society’s big questions, with the aim of positively shaping the development of the digital world for the public good. https://www.oii.ox.ac.uk/
