The Oxford Internet Institute’s Professor Sandra Wachter outlines the regulatory loopholes in the EU’s AI legislation, and proposes how these can be closed to address key ethical issues around the development of AI.
Professor Sandra Wachter, Professor of Technology and Regulation at the Oxford Internet Institute, has published an essay in the Yale Journal of Law & Technology entitled “Limitations and Loopholes in the E.U. AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond.”
The essay discusses how lobbying by tech companies and Member States, together with political timing pressures, succeeded in “watering down” Europe’s Artificial Intelligence Act (AIA). Prof. Wachter asserts that the AIA’s far-reaching exceptions for both the public and private sectors, and its over-reliance on self-assessment and co-regulation, create loopholes that could have an enormous impact on AI governance and risk in the European Union, the United States, and beyond.
It argues that, despite laudable efforts by European lawmakers, Europe’s AI regulation, including the AIA, the Product Liability Directive (PLD), and the Artificial Intelligence Liability Directive (AILD), does not sufficiently address key concerns about the development of AI, including discrimination and bias, explainability, hallucinations, misinformation, copyright, data protection, and environmental impact. As it stands, the AIA lacks practical, clear requirements for AI providers and developers and has weak enforcement mechanisms. Recourse mechanisms focus mainly on material harms and monetary damages while ignoring the immaterial, economic, collective, and societal harms of AI, such as discrimination, privacy infringement, and pure economic loss.
“We need to think about harm differently. We tend to think more about tangible, material and visible harm, such as injury or destruction of property, but less so about immaterial and invisible harm. But what you cannot see can still hurt you, such as when AI is biased, privacy-invasive or is spitting out fabricated facts,” says Prof. Wachter.
She adds: “The AIA features several wide-reaching and alarming exemptions that could function as loopholes to evade accountability for AI-created harms. It is well established that remote biometric identification and predictive policing systems have abysmal accuracy and can generate racist and sexist results, and that the scientific validity of emotion recognition software is highly contested.”
The AIA itself, as well as the accompanying harmonised standards, is heavily influenced by industry stakeholders. What is more, the assessment of compliance is also left in the hands of industry: conformity assessments, which determine whether a high-risk AI product is in accordance with the law, are carried out by the developers themselves.
Finally, the essay proposes several regulatory mechanisms and policy actions that can help close the identified loopholes, create a system that prevents harmful technology, and foster ethically and societally beneficial innovation.
The full essay, ‘Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States and Beyond’, is published by the Yale Journal of Law & Technology as part of the Yale Information Society Project Digital Public Sphere Series.