AI regulation in the UK: it’s time to act now

Published on 4 Apr 2023
Written by Huw Roberts and Luciano Floridi

Oxford Internet Institute researcher Huw Roberts and Professor Luciano Floridi explain the challenges for AI regulation in the UK and outline the case for stronger regulatory protections for UK citizens.

Last week, Elon Musk and more than 1,000 other tech researchers and executives called for a six-month “pause” on the development of advanced AI capabilities. They fear that we are currently experiencing an “arms race” between AI labs that risks spiralling out of control, and that without appropriate controls this rapid development could lead to serious societal disruption.

Here in the UK, the Government has a very different perspective. The AI Regulation White Paper – also published last week – outlines the UK’s plans to use AI regulation to “turbocharge growth” and establish the UK as a “global AI superpower”. The policy document emphasises that responsibility for regulating AI should be left to individual regulators, who will address AI risks specific to their remits. The catch? It proposes no new regulation or statutory powers to support regulators in achieving this.

This sector-led approach to AI governance marks a continuation of the contextual strategy first outlined by the Government in 2018, which stated that “a blanket AI-specific regulation, at this stage, would be inappropriate”. However, to mitigate the risk of regulators introducing disjointed AI governance initiatives, the White Paper establishes a set of responsible AI principles that anchor initiatives and proposes a Central Government Risk Function designed to promote coordination between regulators. Even with these additions, the UK approach contrasts with the cross-cutting AI-specific regulations recently proposed in other jurisdictions, including the European Union, Brazil, and Canada.

Over the past year, we have been undertaking a contextualised analysis of the UK’s sector-led approach to AI governance and believe that it has many positive aspects. It would be ineffective and inefficient to have one regulator addressing the components of a medical device and another regulating the AI system embedded in the same equipment. Assigning responsibility for AI to existing regulatory remits also allows sector-specific expertise to be applied in the contexts in which AI is developed and deployed. Over time, as more products and services integrate AI, the strengths of a sector-led approach will become apparent.

However, notable shortcomings and risks remain. There is a fear from Government that any new legislative requirements could “hold back innovation”, leaving regulators to rely on existing powers to govern AI. When considered in light of the deregulatory trend taking place in the UK, such an approach becomes problematic. In an effort to promote innovation, UK data protection rules are being weakened, and efforts to provide the country’s competition regulator with new powers to address the anti-competitive advantages of Big Tech companies have been severely delayed. More broadly, the UK’s post-Brexit “bonfire” of retained EU law will lead to around 4,000 pieces of EU-derived legislation being repealed or amended by the end of the year. All this creates significant ambiguity for companies trying to comply, and for regulators trying to introduce and enforce protections.

The resistance to providing new powers for regulators or introducing cross-cutting hard law for AI is particularly troublesome at a time of rapid advances in general-purpose AI, such as the large language model (LLM) GPT-4. Numerous companies across sectors have begun integrating these technologies into their products and services, with little in the way of meaningful oversight or guidance. Take the case of Microsoft: in February it integrated OpenAI’s ChatGPT technology into its search engine, Bing. In March, it laid off its entire AI ethics and society team, which taught employees how to make AI tools responsibly. Given that LLMs will cut across sectors, the proliferation of these systems represents an immediate challenge to the resourcing, powers, and coordination of UK regulators.

Now is the time for the UK to introduce stronger regulatory protections for citizens. At a minimum, the Government should immediately move to place the AI principles outlined in the White Paper on a statutory footing. This would provide regulators with enhanced powers to address the risks of AI, and place an onus on them to consider AI as a regulatory priority. There is no justifiable reason to delay providing regulators with these powers. This is particularly true given the slow pace of regulation in the UK, with the publication of the AI Regulation White Paper itself delayed for a year.

While these powers are being legislated for, UK regulators should turn their attention to emergent challenges, especially from LLMs and other general-purpose AI. A first step in this respect is clarifying how existing UK law applies to these technologies. The UK’s data protection authority and medical regulator have produced initial guidance, but more work is needed from a wider range of bodies. Beyond this, regulators should reflect on whether a sectoral approach is adequate for addressing the risks of certain general-purpose AI. Cross-cutting upstream risks from LLMs relating to competition and data protection, combined with downstream risks relating to disinformation and cybersecurity, are challenges that individual regulators may be ill-equipped to address. While the Central Government Risk Function is being established, the Digital Regulation Cooperation Forum – a body established to promote formal collaboration between key UK digital regulators – should take the lead in assessing whether additional technology-specific regulation is required.

A failure to act now risks undermining positive outcomes for citizens, clarity for companies, and even the regulatory autonomy of the UK, with the resulting ambiguity filled by guidance from elsewhere, including the EU.

-ends-

NB: This blog elaborates on an open letter by Huw Roberts published in the Financial Times, 3rd April 2023.
