
Professor Helen Margetts shares her insights on the AI Safety Summit


Published on 15 Nov 2023
Written by Helen Margetts
As the media frenzy surrounding the AI Safety Summit subsides, Helen Margetts, Professor of Society and the Internet, Oxford Internet Institute, shares her reflections from attending the Summit and considers its long-term impact on AI policy.

Although the location made for a difficult day for people without helicopters, I was pleased to attend the first day of the AI Safety Summit itself at Bletchley Park. There have been a great many Summit-related events – the prologue, official fringe, fringe of the fringe and post-fringe – in the weeks leading up to the Summit, drawing in a huge range of people and organisations. As one of the very few academics attending the actual Summit, I felt as if there were many important perspectives to represent.

The narrow focus of the Summit on existential risks of so-called ‘frontier’ AI had initially depressed my expectations. At both the organisations where I work – the Oxford Internet Institute at the University of Oxford, and the Public Policy Programme of the Alan Turing Institute – I research how to maximise the opportunities of AI to improve governance and democracy, developing frameworks for responsible innovation, minimising risks and keeping citizens safe. These are the issues that seem most likely to be turbo-charged by frontier AI.

However, the day did feel important. Why – and what will go forward?

First, the range of bodies gathered at Bletchley Park was truly impressive – from international organisations and companies across the tech landscape to a wide range of national governments. To have high-level representatives from the US and China (as well as India, Korea, Nigeria, and the European Union) speak on the plenary stage in the morning felt like a moment.

Second, the roundtables, of 35 delegates each, seemed well-chaired and generated some meaningful discussion. There was consensus that international co-operation and convergence were essential. Although not always explicit, there was an emphasis on standardisation: of internationally comparative measurement (how do we measure model performance, capability and risk?); of safety testing itself (red-teaming, algorithmic auditing and so on); and of safety standards themselves. All will require standards development across countries and companies. Consensus only went so far, though – people made distinct cases for one of at least eight international organisations (including the UN, UNESCO, the OECD and GPAI) to take the lead. Real international co-operation will mean making choices.

Third, like any elite event, there were useful micro-level connections to be made. Because of the seniority of the people there, bilateral discussions made it possible to deepen nascent collaborations between policymakers, firms, experts and civil society organisations, or to raise them up the agenda. If I was having such conversations, they were surely mirrored across the Summit. It felt as if the event played a role in consolidating the landscape.

There were jarring notes – especially the repeated and contestable statement in the plenary sessions that ‘science fiction has become science fact’. I hope history will not count the discussion with the Prime Minister and Elon Musk as part of the Summit. As a political scientist, I was uncomfortable that a head of government should be asking questions of a tech mogul and not the other way around. Brilliant though he may be at technological development, Musk has shown himself to lack skills in understanding the social world – leaving him ill-equipped to tackle the central question of the Summit: how will frontier AI affect all of us?

Overall, there is a feeling of movement in a positive direction. The extensive international agreement to co-operate on model testing, and the announcement of Safety Institutes across countries – all with the involvement of governments as well as companies going forward – represent definite progress. Regardless of your views on existential risk, we need to develop research into its likelihood and impact, just as we have done for previous controversial technologies.

The announcement of the US Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence the day before was seen by some as upstaging the Summit – but it added to the general feeling of momentum and multistakeholder engagement on AI. Its wider focus on all types of AI and on different categories of risk – including those more immediate risks that already concern people, such as online safety, bias and fairness, and transparency and accountability in governance – reinforced that momentum. The location of the US Safety Institute in NIST provides a much-needed explicit link with standards.

A lot of groundwork has already been done on these issues, much of it discussed at the Summit and presented at Summit-associated events in the run-up – so there are foundations to build on going forward.

As Alan Turing himself said, ‘We can only see a short distance ahead, but we can see plenty there that needs to be done.’

Find out more about the work of Professor Helen Margetts, Professor of Society and the Internet at the Oxford Internet Institute and Professorial Fellow at Mansfield College, University of Oxford. Prof Margetts also set up the Public Policy Programme at The Alan Turing Institute in 2018 and is its Director.

Find out more about how the OII is leading the debate on AI and, in particular, Generative AI, which has been a key area of interest for OII faculty, researchers and students for many years.