
PRESS RELEASE
Principles are no guarantee of ethical AI, says Oxford ethicist

Published on 4 Nov 2019
A leading expert in data ethics at the Oxford Internet Institute believes that establishing principles for the governance of Artificial Intelligence (AI) will not guarantee its trustworthy or ethical use by companies and organisations.

Dr Brent Mittelstadt’s paper ‘Principles alone cannot guarantee ethical AI’, published in the journal Nature Machine Intelligence, argues that a principled approach may not be the best way to approach ethical development and governance of AI.

Consensus has seemingly emerged around a set of ethical principles for AI that closely resemble the classic ethical principles of medicine, but there are several reasons to doubt that a principled approach will have an impact on AI development comparable to the one it has historically had in medicine. The vast complexity of AI, and the lack of common aims between developers and users, suggest that principles may be too vague and high-level to be workable.

To address these shortcomings, Dr Brent Mittelstadt calls for increased support for ‘bottom-up’ work on ethical AI. Companies must be prepared to disclose more about how they develop and audit AI systems, and work more openly with researchers and the public.

Dr Mittelstadt, Research Fellow, Oxford Internet Institute, said:

“We’ve seen at least 84 attempts from companies and organisations to develop principles for the ethical use of AI. While this trend is welcome, a simple copy and paste of ethical principles from the field of medicine will not be enough to ensure ethical and trustworthy deployment of AI technologies.

“Health professionals are heavily regulated, part of a common and established community, and most feel a moral obligation to their patients rather than institutions or companies.

“AI developers, while often well-meaning, will often prioritise commercial interests over public interests, and lack centuries of open and transparent data and case studies to inform their work. Greater accountability and simple changes like the increased involvement of ethicists and social scientists in AI development teams could make a significant impact.”

Notes for editors:

The full article is published in the journal Nature Machine Intelligence
