
Thinking around Corners: How to Regulate AI-Based Medical Devices Effectively

Published on 8 Dec 2023
Written by Daria Onitiu, Brent Mittelstadt and Sandra Wachter
Instead of fixating on isolated risks and regulatory gaps, it is crucial to focus on how developers are making decisions about safety and performance issues in medical AI.

EU AI governance is set to face a stress test from the layers of complexity involved in regulating “high-risk” systems already covered by sectoral legislation, such as the EU Medical Device Regulation, and from questions of legitimacy about how policymakers should approach foundation models and general-purpose systems under the EU AI Act.

Medical AI applications in particular – from continuously learning diagnostic algorithms to Large Language Models used as patient interfaces – point to a growing need for more comprehensive and effective risk mitigation strategies across the lifecycle of AI systems.

In a new paper “How AI challenges the Medical Device Regulation: Patient safety, benefits, and intended uses”, currently available as a pre-print, Dr. Daria Onitiu, Professor Sandra Wachter, and Professor Brent Mittelstadt assess the challenges for AI risk management through the lens of patient benefit. The paper examines motivations for regulatory alignment with new technologies in the context of the EU Medical Device Regulation, focusing on how developers can shape the expected benefits of AI technologies, make necessary trade-offs to minimise risks, and identify their short-term and long-term impacts.

The authors challenge the basis on which manufacturers address system safety and performance, specifically how assurances are made about fairness, interpretability, and usability in AI-based medical devices. A key aspect is the manufacturer’s statement of a device’s “intended use.” Given the general lack of specific practical requirements emerging from current technical standardisation processes for (medical) AI, it is concerning that these statements focus solely on “patient safety” and not on “patient benefit.”

This key vulnerability in the risk assessment requirements of the EU Medical Device Regulation raises concerns about how patient safety will be maintained in practice. The gap demands attention from developers, policymakers, and industry when framing and measuring the practical utility of medical AI devices in patient-centred care.

While the paper focuses primarily on the applicability of the EU Medical Device Regulation to AI, its findings serve as a crucial reference point for other emerging technologies and for related discussions concerning the EU AI Act. Observers of these discussions, recognising the challenges of balancing the risks and benefits of technology, have acknowledged the impact of the socio-political climate during negotiations and the trilogue phase.

In the regulation of foundation models at large, the crux lies not in how to make the AI industry abide by hard rules, but in understanding its approach to risk assessment. The timing and scope of legal intervention become paramount and should steer discussion away from dichotomies such as innovation versus regulation, or self-regulation versus formal regulation. Understanding the mindset of developers grappling with AI safety, performance, and fundamental rights is crucial, especially in the current landscape, where there is a rush to use frontier AI systems such as Large Language Models in high-risk scenarios while their utility and benefits remain unproven.
