
The EU and the US: two different approaches to AI governance

Published on 15 Nov 2021
Written by Huw Roberts and Luciano Floridi
In this blog, OII alumnus Huw Roberts and Professor Luciano Floridi of the Oxford Internet Institute, University of Oxford, compare the EU and US approaches to AI governance and consider the implications for future collaboration.

The development and use of artificial intelligence (AI) has proliferated in recent years. AI has improved the recommendations we see when looking for a movie to stream, led to scientific breakthroughs in protein structure research, and helped tackle deforestation in the Amazon. However, AI systems also pose significant risks. They have facilitated systematic discrimination against women in hiring decisions, enhanced state surveillance capabilities, and enabled more potent offensive cyber capabilities. Given these developments – both positive and negative – it is unsurprising that governments around the world have started to think more seriously about AI governance.

The EU and the US are two of the most important players in this space; both are international leaders in the development and use of AI technologies, and their efforts to govern AI will have global influence. In our recent paper, published in Science and Engineering Ethics, we take a deep dive into the AI governance approaches of the EU and US, considering the progress each has made and assessing the likelihood of transatlantic cooperation in the future. Perhaps unsurprisingly, we find that the EU and US have pursued highly divergent strategies to maximise the opportunities and minimise the risks of AI.

Policy developments in the EU and US

In recent months, the EU Commission has been making headlines with the publication of its proposed ‘AI Act’. This act builds upon previous soft law initiatives and outlines a risk-based framework for governing AI. Systems deemed to pose an unacceptable risk, such as manipulative or social scoring systems, will be prohibited; high-risk systems that could adversely affect safety or fundamental rights will be subject to a number of specific governance requirements; limited-risk systems will be subject to transparency requirements; and minimal-risk systems will be encouraged to follow codes of conduct. This risk-based framework and the governance initiatives preceding it seek to promote European values by ensuring that fundamental rights are protected, economic outcomes are improved, and social disruption is minimised.

In the US, there has been greater hesitancy about introducing legal restrictions comparable to those of the EU. Indeed, during the Trump administration, government agencies were dissuaded from introducing new regulatory measures for fear that these would hamper innovation. Since early 2021, however, limited governance measures have steadily been introduced. The National AI Initiative Act of 2020, passed in January 2021, mandates the establishment of a number of bodies to provide federal-level guidance, the most notable of which is the ‘National Artificial Intelligence Initiative Office’: a body tasked with supporting AI R&D, educational initiatives, interagency planning, and international cooperation. Other, smaller initiatives are also being pursued by federal bodies, such as the development of a voluntary AI risk framework by the National Institute of Standards and Technology (NIST). Overall, the US approach is characterised by an emphasis on promoting innovation to maintain US global leadership in AI, with the repurposing of existing law and the introduction of soft law currently favoured for governance.

Comparing ethical outcomes and looking to the future

These differing priorities – protecting fundamental rights in the EU and preserving international competitiveness in the US – are consequential for ethical outcomes. Whilst improvable, the EU’s proposed AI Act goes a long way towards ensuring that citizens have meaningful protections against potentially harmful uses of AI. In contrast, the US’s soft law approach offers little. Relying on companies to self-regulate AI has failed to ensure ethical outcomes in the past and seems unlikely to do so in the future. Consider the recent firing of Timnit Gebru and Margaret Mitchell from Google’s AI ethics team; accounts of the exact reason for their termination are contested, yet what is clear is that Google’s attempt to censor AI ethics research related to integral elements of its business model is at the heart of the debate. This episode highlights the tension between the drive for profit and self-regulation.

Looking ahead, the outlook for the ethical governance of AI is not necessarily all rosy for the EU and bleak for the US. The EU AI Act faces many challenges before passing into law, a process that could take years. Most notably, it is likely to face heavy lobbying from American technology companies, which have become the biggest spenders in Brussels. In its response to the EU Commission’s consultation on the AI Act, Facebook stated that many of the stipulated requirements are not feasible. This response is likely indicative of forthcoming efforts to water down the Act’s provisions. At the same time, promising measures have been introduced by some US cities. In New York, lawmakers have been discussing a proposal for algorithmic auditing requirements for systems used in recruitment, and a number of other cities have banned facial recognition technology. A concerted push to elevate some of these promising local provisions to the federal level could see the US close the gap on the EU in ethical AI governance.
