With AI suddenly the buzzword on everyone's lips, there have been several developments at the geopolitical level relating to AI and its regulation, especially in the UK. Yet there is still a great deal of confusion in this space.
Here at the Oxford Internet Institute, University of Oxford, we organised a series of events this year on AI, regulation, geopolitics, and the digital transformation of the public sector. In my latest blog, I try to address some of this confusion and present my main thoughts and key takeaways from the OII events and from discussions with academics, government employees, and policymakers.
Geopolitical risks of AI
Let’s start with the biggest narrative today. Will AI bring about the end of the world? No, AI will not bring about the end of the world. However, AI does improve the overall efficiency and effectiveness of systems. What this means is that actors who do bad things – be they people or countries – can do bad things more efficiently and effectively than ever before. At the country level, this is something we have seen before, and it has been playing out for decades with the internet. The internet was initially viewed as a tool of liberation [1], but studies have since shown that this is far from the truth and that rather the opposite holds. Authoritarian regimes have higher rates of internet penetration than less authoritarian regimes, as the internet improves the ability of the governing powers to control the population – it is an authoritarian ruler’s dream come true [2, 3]. Unsurprisingly, countries like Russia and China are trying to export this model to the world by effectively utilising existing global internet governance forums to craft a more authoritarian future for the internet [4]. This has not yet happened, but the processes are under way today. The same fate awaits AI. It is, and will continue to be, essential that the West – collectively – is able to present a unified front against a potential illiberal, authoritarian, and undemocratic future for AI. We must ensure that these systems are imbued with our core values and beliefs.
This brings us to the next question. Is AI currently unregulated? If you follow the news or discussions in media and policy circles, you would expect the answer to be yes. However, this is not true. The foundations are already there. As we point out in our book chapter on the transnational regulation of AI, “what is being discussed is whether the existing framework needs optimising” [5]. The rules already exist. We have rules that say, for example, that stealing intellectual property, causing loss of life, or discriminating against individuals on the basis of protected characteristics is prohibited, and that you can be punished if you do so. The questions we must be asking instead are: 1) how can we empower our existing regulators to regulate in their areas of expertise as effectively as possible? and 2) how can we enable our regulators to develop new skills and capacities to maintain or improve their ability to regulate well in the age of AI? In this regard, the UK’s proposed approach to the regulation of AI by “leveraging existing activities and expertise from across the broader economy” is absolutely the right way to go [6]. The creation of new agencies specifically tasked with the monitoring, inspection, or regulation of AI systems does not make sense; this is especially true of proposals for global, IAEA-like institutions.
Political power and AI
Is this growing political interest in AI really just about AI? Not exactly. The current interest in AI is representative of a much larger transformation related to the nature of power in the digital age. AI is a power amplifier: it allows large amounts of information to be managed, digested, and utilised in ways not possible before. The country or government that can utilise AI best will see large gains in economic growth, labour, and productivity, amongst other clear benefits. Therefore, we can expect to see (and already are seeing) increasing interest at the geopolitical level in the strategic control and development of the infrastructure underpinning AI, such as chip manufacturing, data centres for storage and processing, data generation capabilities, space-based internet infrastructure, and submarine cable infrastructure, amongst others [7].
Use of AI in the public sector
There are often questions about whether or not AI will transform our governments. The answer is probably yes, but perhaps not in the way most people expect. In order to gain the advantages offered by AI, governments will increasingly attempt to integrate AI into existing business processes, or create new ones. However, it is important to understand what increasing usage of AI in and by governments actually means. While at the governmental level there is certainly interest in citizen-facing applications and AI use cases, the biggest net gains for the public sector will be internally focused. There will be rapidly increasing usage of AI-based systems by governmental agencies – especially, but not only, in the defence sector – to improve their own effectiveness and efficiency. However, this will require new investment in digital infrastructure, digital identities, interoperability platforms, and data management systems, at a minimum.
The way ahead
Ultimately, AI is a bureaucracy reinforcer, not a bureaucracy dismantler. AI will not make the state or the bureaucracy go away; they will only get stronger and more effective. To drive this transformation, there will be increased integration between the private and public sectors as governments attempt to maintain a competitive advantage in the AI industry. AI systems will be procured, built, and trained by private sector companies and hosted on private sector infrastructure, but they will increasingly rely on privately and governmentally held and curated data sources.
AI is already here. It is not possible to turn back time. We must learn to live with it and understand what it can and cannot do.
[1] Diamond, L. (2010). Liberation technology. Journal of Democracy, 21(3), 69-83.
[2] Deibert, R. (2015). Authoritarianism goes global: Cyberspace under siege. Journal of Democracy, 26(3), 64-78.
[3] Rød, E. G., & Weidmann, N. B. (2015). Empowering activists or autocrats? The Internet in authoritarian regimes. Journal of Peace Research, 52(3), 338-351.
[4] Flonk, D. (2021). Emerging illiberal norms: Russia and China as promoters of internet content control. International Affairs, 97(6), 1925-1944.
[5] Dempsey, M., McBride, K., Haataja, M., & Bryson, J. (2022). Transnational Digital Governance and Its Impact on Artificial Intelligence. In The Oxford Handbook of AI Governance. Oxford University Press.
[6] See the UK government white paper on this topic: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper
[7] See this great blog post by Vili Lehdonvirta on the geopolitics of digital infrastructure: https://www.oii.ox.ac.uk/news-events/news/behind-ai-a-massive-infrastructure-is-changing-geopolitics/
About the author
Dr Keegan McBride is a Departmental Lecturer in AI, Government and Policy, and Course Director of the MSc in Social Science of the Internet at the Oxford Internet Institute, University of Oxford. His recent works include a co-authored chapter in The Oxford Handbook of AI Governance: ‘Transnational Digital Governance and Its Impact on Artificial Intelligence’.