The United Kingdom is organising an Artificial Intelligence Safety Summit at Bletchley Park in November. The Summit aims to develop a shared global understanding of the risks that may emerge from powerful new Foundation Models (FMs) and to take initial steps towards a global approach to managing these risks. Though the Summit is generating a large amount of interest, it has also been the target of much criticism.
Dr. Keegan McBride, Departmental Research Lecturer in AI, Government, and Policy, argues that though it is important to understand the risks and opportunities associated with increasingly advanced AI-based systems, the upcoming Summit is unlikely to lead to any meaningful or long-lasting international commitments.
There are several reasons I believe the Summit will fail to deliver, but three stand above the rest.
- First, the AI Safety Summit and the UK’s current AI strategy are primarily concerned with existential risks. This topic has received a large amount of attention in the press, with many leading technologists signing a letter calling for a pause on the development of AI systems out of fear that, unchecked, AI development could bring about the end of civilization as we know it. This viewpoint is strongly supported by the Effective Altruism (EA) community, and the views advanced by the EA movement have found strong support in the UK amongst Prime Minister Sunak and his advisors.
While it is certainly true that new FMs can empower bad actors, the idea that AI will bring about the end of the world in the near future is not grounded in reality. Ultimately, the influence that a small group of organizations and individuals have been able to exert on the shape and direction of the Summit has led many to question the UK’s interest in, and ability to develop, a global regulatory regime for AI.
- Second, while the AI Safety Summit aims to be global in nature, it ignores the existing mechanisms already working to develop global approaches to the regulation of AI, such as the Global Partnership on AI. The effectiveness of such initiatives has been limited to date, and there is no reason to expect that the UK AI Safety Summit will lead to a different result.
A recent article in the magazine Foreign Policy titled “Every Country Is on Its Own on AI” discusses why this is the case. In it, the authors argue that the “current geopolitical conditions are also unusually hostile to building a new control regime to deal with AI hazards”. The rapid pace of advancement and innovation of FMs, the desire of state actors to use AI for their own strategic advantages, and the current geopolitical environment all lead to a situation where global agreement on AI is unlikely to occur.
Looking beyond the issue of whether global regulation of AI will ever be successful, the AI Safety Summit is further limited by two factors. Most importantly, the Summit’s exclusive focus on the Misuse and Loss of Control risks associated with FMs artificially limits its relevance.
- Third, the on-the-ground reality is that – today – training an FM still requires incredibly large amounts of money and time, access to a deep pool of talent, and vast computational power. Only a handful of actors (both states and companies) in the world can build and release such models, and the large majority are based in either the United States or China.
When it comes to FMs, the United Kingdom is a minor player, which further limits the country’s ability to build a global consensus on the regulation of FMs – it is essential for the United States to be on board. Unfortunately, at a time of heightened geopolitical tensions, the inclusion of China at the Summit has played a role in President Biden’s probable absence, further weakening the Summit’s potential for impact and relevance.
Additionally, in the case of the United States, the White House has already secured voluntary commitments from leading AI companies to minimize exactly the sorts of risks on which the AI Safety Summit focuses. For example, there is an agreed-upon commitment to the “internal and external security testing of AI systems before their release….”, and this testing will protect “…against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.” This raises the question: what is the Summit actually hoping to achieve?
What we are left with is a picture of an event that has little chance of influencing the broader global discussions on what is an incredibly important topic. This is unfortunate as there is a need to discuss the ways in which AI is being used today.