PRESS RELEASE

AI standards essential to protect doctor-patient relationships and human rights – report

Published on 7 Jun 2022
Written by Brent Mittelstadt
New report calls for robust AI standards to protect doctor-patient relationships and human rights.

Clear ethical standards and guidance must be in place for the use of Artificial Intelligence in health settings, to protect the relationship of trust between doctors and patients and to safeguard human rights, according to a Council of Europe report written by an Oxford expert and published today.

AI is anticipated to be used in clinical settings, even in direct communication with patients, both as a diagnostic tool and as a care agent. The report was commissioned by the Council of Europe Steering Committee for Human Rights in the fields of Biomedicine and Health and written by Dr Brent Mittelstadt, Director of Research at the Oxford Internet Institute and a leading data ethicist specialising in AI ethics. In it, he investigates the potential impact of AI on the doctor-patient relationship and advises that the use of AI remains ‘unproven’ and could undermine the ‘healing relationship’.

According to the report, ‘A radical reconfiguration of the doctor-patient relationship of the type imagined by some commentators, in which artificial systems diagnose and treat patients directly with minimal interference from human clinicians, continues to seem far in the distance.’

But, Dr Mittelstadt notes, ‘The doctor-patient relationship is a keystone of ‘good’ medical practice, and yet it is seemingly being transformed into a doctor-patient-AI relationship. The challenge…is to set robust standards and requirements for this new type of ‘healing relationship’ to ensure patients’ interests and the moral integrity of medicine as a profession are not fundamentally damaged by the introduction of AI.’

According to the report, with the introduction of AI, a shift could take place in the relationship, but not in the patient: ‘The patient’s vulnerability [is] not changed by the introduction of AI…what changes is the means of care delivery, how it can be provided, and by whom. The shift of expertise and care responsibilities to AI systems can be disruptive in many ways.’

AI poses unique challenges to the patient’s human rights, says the report. It highlights six potential risks:

  • Inequality in access to high quality healthcare;
  • Transparency to health professionals and patients;
  • Risk of social bias in AI systems;
  • Dilution of the patient’s account of well-being;
  • Risk of automation bias, de-skilling, and displaced liability; and
  • Impact on the right to privacy.

Concerns are voiced in the report about the impact of AI on transparency and informed consent. On a basic level: how should such systems explain themselves, or be explained, to both doctors and patients?

Dr Mittelstadt also notes that AI can introduce bias into a situation, but that detecting biases is not straightforward. He writes, ‘AI systems are widely recognised as suffering from bias… Biased and unfair decision-making often…reflects underlying social biases and inequalities. For example, samples in clinical trials and health studies have historically been biased towards white male subjects meaning results are less likely to apply to women and people of colour.’

There are also issues around professional standards in the event AI is used. Careful consideration is essential, according to the report, of the role played by healthcare professionals, who are bound by professional standards, when incorporating AI systems that interact directly with patients. The report underlines the challenges and potential problems in the relationship between physicians and patients, including in terms of patients’ safety and protection: ‘If AI is used to heavily augment or replace human clinical expertise, its impact on the caring relationship is more difficult to predict…. The impact of AI on the doctor-patient relationship nonetheless remains highly uncertain’. And, it points out, ‘AI poses several unique challenges to the human right to privacy.’

The report calls for the Council of Europe to introduce ‘binding recommendations and requirements’ for how AI is deployed and governed, adding, ‘Recommendations should focus on a higher positive standard of care with regards to the doctor-patient relationship to ensure it is not unduly disrupted by the introduction of AI in care settings.’

Read the full report, “The Impact of Artificial Intelligence on the Doctor-Patient Relationship”, by Dr Brent Mittelstadt, Senior Research Fellow and Director of Research at the Oxford Internet Institute.

Watch the video presentation of the report by Dr Mittelstadt. Hear Dr Mittelstadt explain how AI is used in healthcare today and how it may be used in the coming years, outline the main ethical and legal challenges of AI in healthcare, and highlight the key human rights implications for patients.
