Principles are no guarantee of ethical AI, says Oxford ethicist
4 November 2019
Dr Brent Mittelstadt’s paper ‘Principles alone cannot guarantee ethical AI’, published in the journal Nature Machine Intelligence, argues that a principled approach may not be the best way to ensure the ethical development and governance of AI.
Consensus has seemingly emerged around a set of ethical principles for AI that closely resemble the classic ethical principles of medicine, but there are several reasons to doubt that a principled approach will have an impact on AI development comparable to the impact it has historically had in medicine. The vast complexity of AI and the lack of common aims between developers and users suggest that such principles may be too vague and high-level to be workable.
To address these shortcomings, Dr Brent Mittelstadt calls for increased support for ‘bottom-up’ work on ethical AI. Companies must be prepared to disclose more about how they develop and audit AI systems, and work more openly with researchers and the public.
Dr Mittelstadt, Research Fellow, Oxford Internet Institute, said:
“We’ve seen at least 84 attempts from companies and organisations to develop principles for the ethical use of AI. While this trend is welcome, a simple copy and paste of ethical principles from the field of medicine will not be enough to ensure ethical and trustworthy deployment of AI technologies.
“Health professionals are heavily regulated, part of a common and established community, and most feel a moral obligation to their patients rather than institutions or companies.
“AI developers, while often well-meaning, may prioritise commercial interests over the public interest, and lack the centuries of open and transparent data and case studies available to inform medical practice. Greater accountability and simple changes, such as the increased involvement of ethicists and social scientists in AI development teams, could make a significant impact.”
Notes for editors:
The full article is published in the journal Nature Machine Intelligence