PRESS RELEASE

Current discrimination laws failing to protect people from AI-generated unfair outcomes

Published on 26 May 2022
Written by Sandra Wachter
A new paper from an Oxford academic calls for changes to current laws to protect the public from AI-generated unfair outcomes.

AI creates unintuitive and unconventional groups to make life-changing decisions, yet current laws do not protect members of online algorithmic groups from AI-generated unfair outcomes, says a new paper from a leading Oxford academic.

A paper from Professor Sandra Wachter at the Oxford Internet Institute, published today, reveals that the public is increasingly the unwitting subject of new and worrying forms of discrimination due to the growing use of Artificial Intelligence (AI).

For example, using a certain type of web browser such as Internet Explorer or Safari can result in a job applicant being less successful when applying online. Candidates in online interviews may be assessed by facial recognition software that tracks facial expressions, eye movement, respiration or sweat.

The paper argues there is an urgent need to amend current laws to protect the public from this emergent discrimination arising from the increased use of Artificial Intelligence. In ‘The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law’, author Professor Sandra Wachter highlights that AI is creating new digital groups in society – algorithmic groups – whose members are at risk of being discriminated against. These individuals should be protected by reinterpreting existing non-discrimination law, she argues, and she outlines how this could be achieved.

AI-related discrimination can occur in very ordinary, everyday activities, often without individuals being aware of it. In addition to job applications, other scenarios include applying for a financial loan, where an applicant is more likely to be rejected if they use only lower-case letters when completing their digital application, or if they scroll too quickly through the application pages.

The paper highlights that these new forms of discrimination often do not fit into the traditional norms of what is currently considered discrimination and prejudice. AI challenges our assumptions about how the law defines discrimination, identifying and categorising individuals based on criteria that are not currently protected under the law. Familiar categories such as race, gender, sexual orientation and ability are replaced by groups like dog owners, video gamers, Safari users or “fast scrollers” when AI makes hiring, loan or insurance decisions.

Professor Sandra Wachter explains why this is important: “Increasingly, decisions made by AI programmes can prevent equal and fair access to basic goods and services such as education, healthcare, housing, or employment. AI systems are now widely used to profile people and make key decisions that impact their lives. Traditional norms and ideas of defining discrimination in law are no longer fit for purpose in the case of AI, and I am calling for changes to bring AI within the scope of the law.”

Professor Wachter’s new theory is based on the concept of ‘artificial immutability’. She identifies five features of artificial immutability – opacity, vagueness, instability, involuntariness and invisibility – that contribute towards discrimination. Reconceptualising the law’s envisioned harms is required to assess whether new algorithmic groups offer a normatively and ethically acceptable basis for important decisions. To do so, greater emphasis needs to be placed on whether people have control over decision criteria and whether they are able to achieve important goals and steer their path in life.

Read the full paper, ‘The Theory of Artificial Immutability: Protecting Algorithmic Groups under Anti-Discrimination Law’ by Professor Sandra Wachter.

For more information call +44 (0)1865 287 210 or contact press@oii.ox.ac.uk.

ENDS

Notes for Editors

About the OII

The Oxford Internet Institute (OII) is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet. Drawing on many different disciplines, the OII works to understand how individual and collective behaviour online shapes our social, economic and political world, taking a combined approach to tackling society’s big questions with the aim of positively shaping the development of the digital world. Since its founding in 2001, research from the OII has had a significant impact on policy debate, formulation and implementation around the globe, as well as a secondary impact on people’s wellbeing, safety and understanding.
