MSc student Jessica Morley and Professor Luciano Floridi answer questions about their call for a new approach to ethical standards in Artificial Intelligence (AI), published in the leading medical journal The Lancet.

So, for the uninitiated, what is AI and how is it used in the healthcare sector?

Luciano: Put simply, AI is an umbrella term for a range of data science techniques that enable the completion of tasks that would require intelligence were they to be completed by a human. This does not mean that a machine, or software programme, completes the task in the same way that a human would. For example, a dishwasher can clean the dishes and I can clean the dishes, but the way that we complete the task is different. In the context of healthcare, we are typically talking about software applications of AI where, for example, an algorithm (interpreted as a set of instructions) is trained to be able to identify abnormalities in a scan or to recognise patterns in collections of symptoms typed into an app. This training is done by exposing the algorithm to large quantities of relevant data and either ‘teaching’ it to recognise normal vs. abnormal, or letting it ‘learn’ by itself.

These techniques themselves are not new; they have been around for decades. But with the decreasing cost of ever-faster computation and the increasing use of digital health technologies, including apps and wearables, in all care settings, the amount of data generated and manipulated has increased, creating significant opportunities for utilising AI in healthcare settings. These applications of AI, if harnessed appropriately, could enable healthcare providers to target the causes of ill-health and monitor the effectiveness of preventions and interventions. For this reason, policymakers, politicians, clinical entrepreneurs, and computer and data scientists argue that AI will be a key part of healthcare in the future.

What are the risks associated with the use of AI in healthcare?

Jess: The willingness to embrace AI in the potential future of healthcare is a positive development. However, healthcare providers should be mindful of the risks that arise from AI’s ability to change the intrinsic nature of how healthcare is delivered. These potential transformations raise ethical risks relating to the data (i.e. the evidence) an algorithm uses to make a decision, the impacts that the use of AI might have on the system, and the ability of regulators to understand what is going on and how to redress harms.

For example, people from black and minority ethnic communities have historically received poorer treatment from the healthcare system and, as a result, are associated with poorer outcomes. Giving an algorithm, without context, a dataset in which race appears to lead to poorer outcomes could cause it to ‘learn’ that these patients should be de-prioritised because they are less likely to benefit from medical intervention. This would clearly be a discriminatory practice, yet regulators might have little oversight of how it came about and therefore of how to address it.
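The mechanism Jess describes can be sketched with a toy, entirely hypothetical illustration (all data, group labels, and function names below are invented for the example): a naive model that scores patients by their group’s average historical outcome will faithfully reproduce whatever discrimination is baked into its training data.

```python
# Hypothetical historical records: (group, benefited_from_treatment).
# Group "B" shows poorer recorded outcomes because of past unequal
# care, not because treatment is inherently less effective for them.
records = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def train(records):
    """'Learn' a priority score per group: the mean historical benefit.

    The model has no context for *why* outcomes differed, so the bias
    in the data becomes the bias of the model.
    """
    totals, counts = {}, {}
    for group, benefited in records:
        totals[group] = totals.get(group, 0) + benefited
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

scores = train(records)
# The model now ranks group "B" lower purely because of its biased
# training history, de-prioritising those patients.
print(scores["A"] > scores["B"])  # True
```

Nothing in the code inspects race-specific medical evidence; the discriminatory ranking emerges purely from uncontextualised historical data, which is exactly why regulators can struggle to see how it arose.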

How can we mitigate these risks?

Luciano: We need a bold and systematic approach to the implementation of AI solutions in healthcare, one that recognises the challenges and addresses them directly. Crucially, the new approach must not rely solely on hard governance measures, such as statutory obligations, for the development of a robust regulatory system. These measures are necessary but insufficient. What is also needed is a new ethical focus on end-users: their expectations, demands, needs and rights.

To date, responses such as the NHS’s Code of Conduct for Data Driven Health and Care Technologies and other non-sector-specific codes centre on protecting the individual. However, the epistemic, normative, and overarching ethical risks associated with AI use in healthcare also arise at the relationship, group, institutional, and societal levels. For example, algorithms could misjudge the severity of an individual’s symptoms, and biased training datasets could lead to disproportionately better or worse health outcomes for different groups of people, resulting in a loss of public trust.

Consequently, any ethical analysis of an AI system by healthcare governing bodies must consider how potential ethical harms arise at different levels of analysis and at different stages of an algorithm’s life cycle. This stage-by-stage analysis is essential because the ethical impact of an algorithm can be altered in either direction at different junctures.

What steps should be put in place now to support the successful application of AI-based technologies in healthcare systems?

Jess: We are calling for the creation of an internationally standardised and structured ethical review guideline, developed by an advisory committee consisting of technologists, clinicians, bioethicists, data ethicists, lawyers, human rights experts, and patient representatives.

The development of the new guideline should follow the best-practice approaches set out by the Guideline International Network, enabling it to be added to the International Guideline Library and adopted by healthcare systems looking to implement AI. The guideline should work in tandem with the legal requirements of existing regulations. In addition, a mechanism needs to be established for the regular review of this analysis to ensure it remains fit for purpose.

In its 20-2024 strategy, the World Health Organisation states that it is developing an ethical guideline. We urge it to follow this approach.

What’s the ultimate goal for the successful adoption of AI solutions in healthcare?

Luciano: The ultimate ambition is to ensure that no AI solution is procured and implemented until a robust ethical assessment of it has been completed, according to the stages outlined in the proposed new guideline. Only then will healthcare systems be able to enjoy the dual advantages of ethical AI: capitalising on the opportunities while appropriately and proactively mitigating the risks, to achieve the best possible outcomes for all.

The full comment by Floridi and Morley is published in The Lancet.