
PRESS RELEASE
New approach needed to address bias in machine learning and AI systems, say Oxford legal and AI ethics experts

Published on 8 Mar 2021
Leading experts in ethics and law from the Oxford Internet Institute (OII) believe that the ways AI systems are commonly designed to measure bias and fairness clash with the fundamental aims of EU non-discrimination law.


Professor Sandra Wachter and Dr Brent Mittelstadt, leading experts in ethics and law at the Oxford Internet Institute (OII), University of Oxford, together with Dr Chris Russell (Amazon), coordinators of the Governance of Emerging Technologies Research Programme, believe that the ways AI systems are commonly designed to measure bias and fairness clash with the fundamental aims of EU non-discrimination law.

In their paper ‘Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law’, forthcoming in the West Virginia Law Review, the researchers explain that the normative aim behind EU non-discrimination law is to actively dismantle inequality in society. Most fairness metrics clash with this aim because they take existing inequalities for granted. This is a problem because the status quo is far from neutral.

The researchers analysed 20 fairness metrics for their compatibility with EU non-discrimination law and developed a classification system based on how they treat existing inequalities in society (‘bias preserving’ and ‘bias transforming’ fairness metrics).

They conclude that 13 of the 20 are bias preserving, meaning they ‘freeze’ the status quo, and argue that ‘bias preserving’ fairness metrics require legal justification when used to make decisions about people in Europe.
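To make the distinction concrete, the sketch below (a purely illustrative Python example, not taken from the paper) computes the equal opportunity gap, a widely used metric that compares a model's true positive rates across groups. Because it conditions on the observed historical labels, any bias already baked into those labels becomes the baseline against which ‘fairness’ is measured; the column names and data are hypothetical.

```python
import pandas as pd

def true_positive_rate(df, group_col, group, label_col, pred_col):
    # P(prediction = 1 | observed label = 1, group) -- note the conditioning
    # on the observed historical label, which may itself be biased.
    positives = df[(df[group_col] == group) & (df[label_col] == 1)]
    return float("nan") if positives.empty else (positives[pred_col] == 1).mean()

def equal_opportunity_gap(df, group_col, label_col, pred_col):
    # Largest difference in true positive rate between any two groups.
    rates = [true_positive_rate(df, group_col, g, label_col, pred_col)
             for g in df[group_col].unique()]
    return max(rates) - min(rates)

# Hypothetical hiring data: 'hired' is the historical (possibly biased) label,
# 'predicted' is the model's decision, 'gender' is the protected attribute.
data = pd.DataFrame({
    "gender":    ["F", "F", "F", "M", "M", "M"],
    "hired":     [1,    1,   0,   1,   1,   1],
    "predicted": [1,    0,   0,   1,   1,   0],
})
print(equal_opportunity_gap(data, "gender", "hired", "predicted"))  # ~0.17
```

A small gap only means the model agrees with the historical labels equally well for each group; it says nothing about whether those labels were fair in the first place, which is the sense in which such metrics preserve bias.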

Dr Sandra Wachter, Associate Professor and Senior Research Fellow at the Oxford Internet Institute and lead author of the paper, said:

“Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning systems. Algorithms trained on biased data will, without intervention, produce biased outcomes and increase the inequality experienced by historically disadvantaged groups. We need to acknowledge that the status quo is not neutral and instead use AI and statistical analysis to shed light on existing inequalities.”

Instead the authors advocate the use of ‘bias transforming’ metrics that better align with the aims of non-discrimination law. In contrast to other fairness metrics used in machine learning and AI, ‘bias transforming’ metrics do not blindly accept social bias as a given or neutral starting point that should be preserved. Instead they require developers and decision-makers to make an explicit decision as to which biases the system should exhibit, and thus which biases can be justified in the context of EU non-discrimination law.

Professor Wachter argues: “Bias transforming metrics force designers and decision makers to confront fairness, and to consider the biases and inequalities in their data that would otherwise be ignored, hidden or treated as justified by bias preserving metrics. We recommend the use of bias transforming metrics for machine learning, which can be seen as a roadmap for societal change in the workplace, lending, education, criminal justice, health, insurance and other areas.”

In a previous paper, ‘Why Fairness Cannot be Automated’, the authors proposed a new way to measure fairness in AI: Conditional Demographic Disparity (CDD). In their latest paper they argue this bias transforming metric is the most compatible with the concepts of equality and illegal disparity as developed by the European Court of Justice. CDD was recently implemented by Amazon and is now available to all customers of Amazon Web Services.
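As a rough illustration of what CDD measures, the sketch below follows the published definition: a group's demographic disparity is its share of negative decisions minus its share of positive decisions, and CDD averages that disparity across strata of a legitimate conditioning variable (here, a hypothetical ‘department’), weighted by stratum size. This is a simplified sketch with made-up data, not the authors' code or the AWS implementation.

```python
import pandas as pd

def demographic_disparity(df, facet_col, facet_value, outcome_col):
    # DD for one group: its share of negative decisions (outcome == 0)
    # minus its share of positive decisions (outcome == 1).
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    if rejected.empty or accepted.empty:
        return 0.0
    return ((rejected[facet_col] == facet_value).mean()
            - (accepted[facet_col] == facet_value).mean())

def conditional_demographic_disparity(df, facet_col, facet_value,
                                      outcome_col, strata_col):
    # CDD: demographic disparity averaged over strata of a conditioning
    # variable, weighted by stratum size.
    return sum(len(stratum) / len(df)
               * demographic_disparity(stratum, facet_col, facet_value, outcome_col)
               for _, stratum in df.groupby(strata_col))

# Hypothetical admissions data: 'admitted' is the decision, 'gender' is the
# protected attribute, 'department' is the legitimate conditioning variable.
data = pd.DataFrame({
    "gender":     ["F", "F", "M", "M", "F", "M", "F", "M"],
    "department": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "admitted":   [1,    0,   1,   1,   0,   0,   1,   1],
})
print(conditional_demographic_disparity(data, "gender", "F", "admitted", "department"))
```

A value near zero indicates that, once the conditioning variable is taken into account, the group is not systematically over-represented among negative decisions.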

Wachter concludes: “Choosing a fairness metric is not a neutral act. You can still use bias preserving metrics in certain contexts; you just need to be aware of the underlying assumptions of the metric. By choosing a fairness metric you are making an explicit decision about which types of inequality are acceptable and which are not.”

To help with this, the paper concludes with a bias preservation checklist: a series of questions designed to help developers and other users of AI, machine learning and automated decision-making systems choose appropriate fairness metrics in daily practice.

Dr Mittelstadt explains: “Choosing a fairness metric is an ethically and legally significant choice that must be treated as such. Inequality cannot be solved simply through technological fixes. Rather, open and collaborative dialogue involving computer scientists and developers, lawyers, ethicists, regulators, the general public and others is essential to tackle this new manifestation of longstanding social problems.”

For more information contact press@oii.ox.ac.uk.

Note to Editors

The paper by Professor Wachter, Dr Mittelstadt and Dr Russell, ‘Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law’, is available as a pre-print and is forthcoming in the West Virginia Law Review.
