Dr Daria Onitiu
Research Associate
Dr Daria Onitiu researches and publishes on the legal, ethical and governance aspects of Artificial Intelligence (AI) technologies, generative AI and deepfakes.
Whether it is merely long-term hype or near-term reality, Large Language Models (LLMs) in medicine are well positioned to take on a wide range of medical tasks and to be used for medical research purposes. For example, ChatGPT could be used to record and write clinical notes from doctor-patient consultations, whilst Med-PaLM2, a specialised medical LLM, is already in use and designed to ‘accurately and safely answer medical questions’.
The problem? Providers of general-purpose and specialised LLMs in medicine cannot ensure their safety and effectiveness from the outset, which can create a mismatch between what patients and doctors expect of a device and how it actually performs. These models carry inherent risks, such as issuing inaccurate advice or “hallucinating” responses.
Furthermore, these tools make it difficult for providers to demonstrate risk management under the EU Medical Device Regulation (MDR). These challenges are explored in a new paper by Dr Daria Onitiu, Professor Sandra Wachter and Professor Brent Mittelstadt of the Oxford Internet Institute, which sets out the risk profile of medical LLMs.
The MDR risk management framework is a sequential process requiring providers to define, estimate, mitigate and monitor performance and safety risks. But this “forward-walking” approach clashes with how providers conduct effective risk management of medical LLMs in practice, creating problems in articulating a device’s intended use, mitigating its risks, and adapting the model to new use cases.
Rather than advocating changes to the MDR itself, the Oxford experts propose a new logic that providers can follow within the existing framework to ensure the safety and effectiveness of medical LLMs. This logic flows “backward”: providers first consider different use cases, new risks and trade-offs, and use these to formulate the intended use of a medical LLM.
Download the full paper, “Walking Backward to Ensure Risk Management of Large Language Models in Medicine” by Daria Onitiu, Sandra Wachter and Brent Mittelstadt, published in the Journal of Law, Medicine & Ethics.
Find out more about the work of Dr Daria Onitiu, Professor Sandra Wachter and Professor Brent Mittelstadt.