
The global implications of the limitations and loopholes of the E.U. AI Act and AI liability directives, according to an Oxford researcher

Published on
19 Aug 2024
Written by
Sandra Wachter

The Oxford Internet Institute’s Professor Sandra Wachter outlines the regulatory loopholes in the EU’s AI legislation, and proposes how these can be closed to address key ethical issues around the development of AI.

Professor Sandra Wachter, Professor of Technology and Regulation at the Oxford Internet Institute, published an essay in the Yale Journal of Law & Technology, entitled “Limitations and Loopholes in the E.U. AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond.”

The essay discusses how tech companies and Member States’ lobbying efforts and political timing pressures succeeded in “watering down” Europe’s Artificial Intelligence Act (AIA). With far-reaching exceptions for both the public and private sector, and an over-reliance on self-assessment and co-regulation, Prof. Wachter asserts that the AIA’s loopholes could have an enormous impact on AI governance and risk in the European Union, the United States, and beyond.

It argues that, despite very laudable efforts by European lawmakers, Europe’s AI regulation—including the AIA, Product Liability Directive (PLD), and the Artificial Intelligence Liability Directive (AILD)—does not sufficiently address key concerns about the development of AI, including discrimination and bias, explainability, hallucinations, misinformation, copyright, data protection, and environmental impact. As it stands, the AIA lacks practical, clear requirements for AI providers and developers, and has weak enforcement mechanisms. Recourse mechanisms focus mainly on material harms and monetary damages while ignoring the immaterial, economic, collective and societal harms of AI, such as discrimination, privacy infringement and pure economic loss.

“We need to think about harm differently. We tend to think more about tangible, material and visible harm such as injury or destruction of property but less so about immaterial and invisible harm. But what you cannot see can still hurt you such as when AI is biased, privacy invasive or is spitting out fabricated facts,” says Prof. Wachter.

She adds: “The AIA features several wide reaching and alarming exemptions that could function as loopholes to evade accountability for AI created harms. It is well established that remote biometric identification and predictive policing systems have abysmal accuracy and can generate racist and sexist results, and that the scientific validity of emotion recognition software is highly contested.”

The AIA itself, as well as the accompanying harmonised standards, is heavily influenced by industry stakeholders. What is more, the assessment of compliance is also left in the hands of industry: conformity assessments – the assessment of whether a high-risk AI product is in accordance with the law – are left to developers.

Finally, the essay proposes several regulatory mechanisms and policy actions that could help close the identified loopholes, create a system that prevents harmful technology, and foster ethically and societally beneficial innovation.

The full essay, ‘Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States and Beyond’, is published as part of the Yale Information Society Project Digital Public Sphere Series, by the Yale Journal of Law & Technology.
