
Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI

With Professor Sandra Wachter
Recorded:
10 Jun 2020
Speakers:
Professor Sandra Wachter
Filming venue:
Online webinar

Fairness and discrimination in algorithmic systems are globally recognised as topics of critical importance. To date, the majority of work has started from an American regulatory perspective defined by the notions of ‘disparate treatment’ and ‘disparate impact’. European legal notions of discrimination are not, however, equivalent. In this talk I examine EU law and the jurisprudence of the European Court of Justice concerning non-discrimination. I identify a critical incompatibility between European notions of discrimination and existing work on algorithmic and automated fairness. Algorithms are not similar to human decision-making: they operate at speeds, scales, and levels of complexity that defy human understanding; they group and act upon classes of people that do not resemble historically protected groups; and they do so without potential victims ever being aware of the scope and effects of the decision-making. As a result, individuals may never know they have been disadvantaged and thus lack a starting point from which to raise a claim. A clear gap exists between statistical measures of fairness and the context-sensitive, often intuitive and ambiguous discrimination metrics and evidential requirements historically used by the Court.

The talk focuses on three contributions. First, I review the evidential requirements for bringing a claim under EU non-discrimination law and show that, because algorithmic discrimination differs in kind from human discrimination, these requirements are not fit to be automated. Second, I show that automating fairness or non-discrimination in Europe may be impossible, because the law does not provide a static or homogeneous framework. Finally, I propose a statistical test as a baseline for identifying and assessing potential cases of algorithmic discrimination in Europe. Adopting this statistical test will help push forward academic and policy debates around scalable solutions for fairness and non-discrimination in automated systems in Europe.
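The abstract does not spell out the test itself; the paper on which the talk is based proposes ‘conditional demographic disparity’ (CDD) as the baseline measure. As a rough illustration only, the sketch below implements one common formulation of CDD: demographic disparity (a group’s share of the disadvantaged pool minus its share of the advantaged pool) computed within strata of a legitimate conditioning attribute and averaged by stratum size. The data and the field names (group, income_band, accepted) are hypothetical.

```python
from collections import defaultdict

def demographic_disparity(records, group):
    # DD for `group`: its share of the disadvantaged (rejected) pool
    # minus its share of the advantaged (accepted) pool.
    rejected = [r for r in records if not r["accepted"]]
    accepted = [r for r in records if r["accepted"]]
    if not rejected or not accepted:
        return 0.0  # disparity is undefined for a one-sided stratum
    p_rejected = sum(r["group"] == group for r in rejected) / len(rejected)
    p_accepted = sum(r["group"] == group for r in accepted) / len(accepted)
    return p_rejected - p_accepted

def conditional_demographic_disparity(records, group, stratum_key):
    # CDD: demographic disparity within each stratum of a "legitimate"
    # conditioning attribute, averaged with weights proportional to
    # stratum size.
    strata = defaultdict(list)
    for r in records:
        strata[r[stratum_key]].append(r)
    n = len(records)
    return sum(len(rs) / n * demographic_disparity(rs, group)
               for rs in strata.values())

# Hypothetical loan decisions: protected-group membership, a legitimate
# conditioning attribute (income band), and the automated outcome.
records = [
    {"group": "A", "income_band": "low",  "accepted": False},
    {"group": "A", "income_band": "low",  "accepted": True},
    {"group": "B", "income_band": "low",  "accepted": True},
    {"group": "A", "income_band": "high", "accepted": False},
    {"group": "B", "income_band": "high", "accepted": True},
    {"group": "B", "income_band": "high", "accepted": True},
]

# A positive value means group "A" is over-represented among rejections
# even after conditioning on income band.
print(conditional_demographic_disparity(records, "A", "income_band"))  # 0.75
```

On this toy data the measure remains positive within both income bands, which is the kind of prima facie signal the talk suggests a court or regulator could use as a starting point for assessing potential discrimination, rather than as a definitive finding.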
