LLMs in medicine require new forms of risk management, according to Oxford researchers

Published on
7 Jul 2025
Written by
Daria Onitiu
LLM use in modern-day medicine 

Whether it is merely long-term hype or near-term reality, Large Language Models (LLMs) in medicine are well positioned to take on a wide range of medical tasks and to support medical research. For example, ChatGPT could be used to record and write clinical notes from doctor-patient consultations, whilst Med-PaLM 2, a specialised medical LLM, is already in use and designed to ‘accurately and safely answer medical questions’. 

Risks and opportunities for LLMs 

The problem? Providers of general-purpose and specialised medical LLMs cannot ensure safety and effectiveness from the outset, which can create a mismatch with patient and doctor expectations of how safe and effective a device is. These models carry inherent risks, such as issuing inaccurate advice or “hallucinating” responses.

Furthermore, these tools create legal challenges for providers seeking to demonstrate risk management under the EU Medical Device Regulation (MDR). These challenges are explored further in a new paper by Dr Daria Onitiu, Professor Sandra Wachter, and Professor Brent Mittelstadt of the Oxford Internet Institute, which sets out the risk profile of medical LLMs. 

The MDR risk management framework is a sequential process that enables providers to define, estimate, mitigate and monitor performance and safety risks. But this “forward-walking” approach clashes with how providers conduct effective risk management in practice. This leads to problems in articulating a device’s intended use, mitigating risks, and adapting the model to new use cases.

Effective governance of medical LLMs 

Rather than advocating changes to the MDR itself, the Oxford experts propose a new logic for providers that ensures the safety and effectiveness of medical LLMs within the existing MDR framework. This logic flows “backward”, prompting providers to consider different use cases, new risks, and trade-offs when formulating the intended use of a medical LLM. 

Download the full paper “Walking Backward to Ensure Risk Management of Large Language Models in Medicine”, Daria Onitiu, Sandra Wachter and Brent Mittelstadt, published by the Journal of Law, Medicine & Ethics.  

Find out more about the work of Dr Daria Onitiu, Professor Sandra Wachter and Professor Brent Mittelstadt. 
