As the UK Government prepares to host a summit of global leaders from government, industry and academia on the theme of AI safety, the Oxford Internet Institute convened a panel of experts to debate the UK’s role in the development and regulation of the technology at Keble College, Oxford.
The panel, chaired by Dr Keegan McBride, Departmental Research Lecturer in AI, Government and Policy at the OII, included representatives with backgrounds and experience in the technology industry, regulation, academia and international affairs.
The UK’s role
Participants discussed how the UK is taking a leadership role by hosting the summit. Kayla Blomquist, an AI governance and geopolitics researcher at the Oxford Internet Institute, said "the UK is asserting its convening power, bringing different actors together and filling a genuine gap in international forums."
Sue Daley, Director of Technology and Innovation at TechUK, highlighted the UK's attractiveness for developing AI businesses. She said "The UK AI sector is thriving, with a combination of global players and UK firms. We are third in the world, behind the US and China, and that is something to celebrate."
Panellists felt public engagement was crucial to the successful deployment of the technology. Helen Margetts, Professor of Society and the Internet at the OII and Programme Director for Public Policy at the Alan Turing Institute, said "I worry about the focus on existential risk, which may mean people becoming unnecessarily scared. Of course, we must research possible long-term risks, as we have done successfully for other technologies (such as gene editing), but we must also research how to achieve the potential benefits, and the more immediate dangers, including the possibility of online harms being turbocharged by AI. When you talk to a representative sample of the general public about applications of AI in the real world rather than AI in the abstract, you get an interesting, nuanced and sophisticated view of how they actually experience AI, the benefits they see, and the concerns that they have." Daley added that ideally, in ten years' time, we will be in a world where the average person looks at things and says, "can't we just use AI for that?"
The group discussed the benefits and disadvantages of a sectoral approach to developing and regulating AI. Daley said "telecommunications, IT, legal and financial services are leading the way" in deployment, but that other sectors need to catch up to help the UK realise the potential of this transformative technology. Karen Croxson, Chief Data and Technology Insights Officer at the Competition and Markets Authority, said there is a need to focus on opportunities as well as potential immediate and longer-term harms. These could range from "false information, fake reviews, AI-enabled fraud and deception" to "potential future entrenchment of market power". Margetts added that there is a need to look at regulation both horizontally and vertically, saying "AI is a horizontal technology that gets to everything, there isn't any market or area of society that won't be touched by AI. But we also need a vertical, sectoral approach to understand the specific implications of these technologies in policy domains, such as education."
There was optimism for the future in terms of the UK's role. Blomquist said "the UK's education system is a massive draw for international talent", while Croxson said regulators, including via the Digital Regulation Cooperation Forum, are working proactively to "ensure the conditions for sustained, positive innovation" in the UK. Margetts, who is attending the UK Government's AI Safety Summit, added that in ten years she hopes "we will have developed the multidisciplinary research evidence, including social science research, that helps us to understand, and improve, the relationship between AI and society."