PRESS RELEASE

AI modelling tool developed by Oxford academics incorporated into Amazon anti-bias software

Published on
21 Apr 2021

A new method to help detect discrimination in AI and machine learning systems, created by academics at the Oxford Internet Institute, University of Oxford, has been implemented by Amazon in its new bias toolkit, ‘Amazon SageMaker Clarify’, for use by Amazon Web Services customers.

In their paper, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’, Professor Sandra Wachter, Dr Brent Mittelstadt and Dr Chris Russell of the Oxford Internet Institute, University of Oxford, propose a new test for ensuring fairness in algorithmic modelling and data-driven decisions, called ‘Conditional Demographic Disparity’ (CDD). The test is significant because it aligns with the approach courts across Europe use when applying non-discrimination law.
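
To make the idea concrete, the core of CDD can be sketched in a few lines of Python. The sketch below is illustrative only, not the Amazon implementation: it assumes a binary outcome where one value denotes acceptance, and the data frame, column names and example values are hypothetical.

```python
import pandas as pd


def demographic_disparity(df, facet_col, facet_value, outcome_col, positive_value):
    """Demographic Disparity (DD): the group's share of negative outcomes
    minus its share of positive outcomes. A positive value means the group
    is over-represented among rejections."""
    rejected = df[df[outcome_col] != positive_value]
    accepted = df[df[outcome_col] == positive_value]
    if rejected.empty or accepted.empty:
        return 0.0
    share_of_rejected = (rejected[facet_col] == facet_value).mean()
    share_of_accepted = (accepted[facet_col] == facet_value).mean()
    return share_of_rejected - share_of_accepted


def conditional_demographic_disparity(df, facet_col, facet_value,
                                      outcome_col, positive_value, group_col):
    """Conditional Demographic Disparity (CDD): the average of per-stratum DD
    values, weighted by stratum size, with strata defined by group_col."""
    return sum(
        len(stratum) / len(df) * demographic_disparity(
            stratum, facet_col, facet_value, outcome_col, positive_value)
        for _, stratum in df.groupby(group_col)
    )


# Hypothetical admissions data: does the disparity against female applicants
# persist once we condition on the department applied to?
applications = pd.DataFrame({
    "gender":     ["female", "male", "female", "male", "female", "male"],
    "department": ["A", "A", "A", "B", "B", "B"],
    "admitted":   [1, 1, 0, 0, 0, 1],
})
cdd = conditional_demographic_disparity(
    applications, facet_col="gender", facet_value="female",
    outcome_col="admitted", positive_value=1, group_col="department")
print(f"CDD for female applicants, conditioned on department: {cdd:+.2f}")
```

A CDD close to zero suggests that, once the conditioning attribute is taken into account, the group is not disproportionately represented among negative outcomes; a clearly positive value is the kind of evidence of disparity the test is designed to surface.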

The anti-discrimination test developed by the Oxford AI experts helps users look for bias in their AI systems, and is particularly relevant for those seeking to detect unintuitive and unintended biases as well as heterogeneous, minority-based and intersectional discrimination.

The Oxford test forms part of a wider suite of materials available to Amazon Web Services customers who want to detect bias in their datasets and models, and to understand how their models make predictions.
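
As a rough illustration of how an AWS customer might request this metric, the snippet below configures a pre-training bias report with the SageMaker Python SDK’s Clarify processor. The IAM role, S3 paths, column names and instance settings are placeholders, and the metric key ("CDDL", conditional demographic disparity in labels) and exact signatures may differ between SDK versions.

```python
from sagemaker import Session, clarify

session = Session()

# Processor that runs the Clarify bias-analysis job.
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::111122223333:role/clarify-execution-role",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the training data lives and where the report should be written (placeholders).
data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/admissions/train.csv",
    s3_output_path="s3://example-bucket/clarify-reports/",
    label="admitted",
    headers=["gender", "department", "score", "admitted"],
    dataset_type="text/csv",
)

# CDD conditions the disparity for the protected attribute (the facet) on a
# grouping attribute: here, gender conditioned on department.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
    group_name="department",
)

# Restrict the pre-training report to the conditional demographic disparity metric.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CDDL"],
)
```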

Associate Professor and Senior Research Fellow Sandra Wachter said:

“I’m incredibly excited to see our work being implemented by Amazon Web Services as part of their cloud computing offering. I’m particularly proud of the way our anti-bias test can be used to detect evidence of intersectional discrimination, which is an area that is often overlooked by developers of AI and machine learning systems.

“It’s less than a year since we published our paper, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’, and it’s very rewarding to see our work having such impact.”

The Oxford experts have also achieved similar success in the field of explainable AI with their counterfactual explanations. Their method makes AI systems explainable and accountable in a meaningful way without the need to fully understand the logic of the algorithm, and without infringing on trade secrets or IP rights. It has been embedded by Google in its popular What-If Tool for TensorFlow and Google Cloud.

Download the paper ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’.
Watch Professor Wachter speaking about her co-authored paper.

Notes for editors

About Professor Sandra Wachter

Professor Sandra Wachter is an Associate Professor and Senior Research Fellow in the Law and Ethics of AI, Big Data and Robotics, as well as Internet Regulation, at the Oxford Internet Institute, University of Oxford. She is also a Fellow at the Alan Turing Institute in London, a Fellow of the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University.

About Dr Brent Mittelstadt

Dr Brent Mittelstadt is a Senior Research Fellow at the Oxford Internet Institute and a British Academy Postdoctoral Fellow.

About Dr Chris Russell

Dr Chris Russell is a Research Associate at the Oxford Internet Institute and Group Leader in Safe and Ethical AI at The Alan Turing Institute.
