A new report from the Tony Blair Institute, co-authored by leading AI figures including two Oxford Internet Institute researchers, has set out recommendations for government ahead of a future ‘AI Bill’, including maintaining a sector-specific approach to regulation and the independence of the AI Safety Institute.
The report was co-authored by:
- Helen Margetts (Professor of Society and the Internet, Oxford Internet Institute, and Director of the Public Policy Programme, The Alan Turing Institute)
- Robert Trager (Co-director of the Oxford Martin AI Governance Initiative)
- Keegan McBride (Departmental Research Lecturer, Oxford Internet Institute)
- Nitarshan Rajkumar (AI researcher, University of Cambridge and former Senior Advisor to the Secretary of State for DSIT)
- Jakob Mökander (Director of Science and Technology Policy, TBI)
- Marie Teo (Senior Advisor, Global Government Engagement, TBI)
As the government drafts an ‘AI Bill’, leading experts from industry, academia and civil society stress the importance of getting the details right to promote innovation and advance the UK’s global leadership in AI safety.
‘Getting the UK’s Legislative Strategy for AI Right’, published today by the Tony Blair Institute for Global Change and co-authored by Jakob Mökander, Helen Margetts, Robert Trager, Keegan McBride, Nitarshan Rajkumar and Marie Teo, provides recommendations to safeguard the UK in the face of rapid AI development and to secure its place as a global leader in the technology.
To date, the UK has adopted a pro-innovation and sector-specific approach to AI. Many potential harms that an AI regulator would be responsible for, from privacy to misinformation, already fall under the remit of existing regulators such as the ICO and Ofcom. However, these regulators frequently lack the resources, expertise and powers to address AI-specific risks.
Getting the UK’s AI legislation right will be critical for the country’s long-term security and prosperity. Yet given the speed and uncertainty of the technology’s development, the report argues, the government should develop a comprehensive legislative strategy for AI that addresses these gaps in current regulatory infrastructure, including by allocating funding in the next Budget, rather than creating a new cross-cutting AI regulator.
In this way, the government can avoid stifling innovation with inflexible ‘too much, too soon’ regulation, without falling behind while a new agency is established.
The report also argues that the government should resist calls for the AI Safety Institute to become a regulator; instead, the Institute should be an independent arm’s-length body focused on advancing scientific understanding.
Beyond its co-authors, the report was developed in collaboration with leading AI innovators and experts: Jack Clark (Co-founder, Anthropic), Gina Neff (Deputy CEO, Responsible AI UK), Owen Larter (Director of Public Policy, Microsoft), Alexander Babuta (Director, Centre for Emerging Technology and Security (CETaS), The Alan Turing Institute), Markus Anderljung (Head of Policy, Centre for the Governance of AI), Max Fenkell (Head of Government Relations, Scale AI), Rebecca Stimson (Director of Public Policy, Meta) and Mihir Kshirsagar (Policy Clinic Lead, Princeton Center for Information Technology Policy).
Read the report here.
Helen Margetts, Professor of Society and the Internet, Oxford Internet Institute, Director of the Public Policy Programme at The Alan Turing Institute, and co-author of the report, said:
“The UK already has a lot of strength in its existing regulators. Now, the government must advance AI readiness across this regulatory landscape, by equipping regulators with the resources to tackle AI’s complex challenges and developing common capacity and expertise in AI for regulation.
“In doing so, the UK can leverage a unique opportunity to lead the global conversation on AI governance.”
TBI’s Director of Science and Technology Policy, Jakob Mökander, said:
“The government is currently preparing a bill that focuses narrowly on frontier AI safety. This is a good first step, but we must go further. The UK needs to develop an overarching legislative strategy for AI to improve transparency and accountability throughout AI supply chains, remain an attractive location for AI research and innovation, and build on its global leadership in AI safety.”
Jack Clark, Co-founder of Anthropic and contributor to the report, said:
“As the AI industry works to establish best practices for testing and partnering with the government, any new legislation from the UK should take into account the need for flexibility for a fast-moving technology. The UK has built an incredible national asset with the UK AI Safety Institute and should use it to push forward the science of AI safety.”
Other recommendations in the report include that the UK should build on its global leadership to promote greater international alignment on AI regulation, and that any binding regulations should be accompanied by incentives for relevant actors to comply.
Editor’s note: Due to the large number of contributors, co-authorship does not imply agreement with every point made in the position paper.