
Behind AI, a massive infrastructure is changing geopolitics

Published on 17 Mar 2023
Written by Vili Lehdonvirta
The computing power needed to research and develop large AI models can be out of reach even for medium-sized states. Professor Vili Lehdonvirta, Oxford Internet Institute, explores what this means for geopolitics.

AI technologies have many important applications ranging from industry and the civil service to military and intelligence, and as such they can strengthen a country’s economy and improve its state capacity.

But developing and operating frontier AI technologies requires considerable material infrastructures, which can have more complicated geopolitical implications. The amount of computing power, or compute, required to train ChatGPT is such that it would take a single graphics processing unit (GPU) of the type you find in an ordinary high-end computer about 288 years to finish a training run.

To make any sort of progress, frontier AI developers use many GPUs in parallel. OpenAI is said to have used 10,000 GPUs to develop ChatGPT, reducing the training time from centuries to days. So where do you find thousands of GPUs set up for AI research? Possibly in just two or three places in the UK: hyperscale data centres owned by Amazon, Google, and Microsoft.
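To make the arithmetic concrete, here is a back-of-envelope sketch in Python. The 288-year figure comes from the text above; the assumption of perfectly linear scaling across GPUs is an idealisation, since real clusters lose some throughput to communication overhead.

```python
# Back-of-envelope: how 10,000 GPUs turn centuries into days.
# Assumes perfectly linear scaling, which real clusters only approximate.
single_gpu_years = 288     # one high-end GPU, full training run (from the text)
cluster_size = 10_000      # OpenAI's reported GPU count

days = single_gpu_years * 365 / cluster_size
print(f"~{days:.1f} days")  # ~10.5 days: "from centuries to days"
```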

To do any sort of serious research on large AI models, you have to send your data to one of these so-called hyperscalers and either pay them or partner up with them. And once you’ve trained your model, actually deploying it so that many people can use it simultaneously may require even more compute.

As a result, pretty much all independent AI research labs, even ones explicitly established as an alternative to the hyperscalers, have eventually fallen right into the hyperscalers’ lap: OpenAI to Microsoft, DeepMind and Anthropic to Google, Hugging Face to Amazon, and so on (I am grateful to my student Divya Siddarth for this observation).

So there is a concern that the more we lean into AI as a society, the more dependent we become, for both research and development (R&D) and daily operations, on infrastructures owned by a handful of hyperscale corporations domiciled abroad. It also means that on a geopolitical level, the UK may lack the autonomy to steer the development of frontier AI in a manner aligned with its values and objectives.

Many academic AI researchers deem this situation unacceptable and are calling for government intervention. Professor Michael Wooldridge of the Alan Turing Institute and Oxford University is calling for a sovereign high-performance cloud infrastructure. Earlier this month the UK government’s Independent Review of the Future of Compute published an excellent final report, which calls for the government to immediately establish a UK AI Research Resource incorporating at least 3,000 top-spec GPUs.

However, other governments have tried something like this before, and the results have not been entirely encouraging. France launched its first sovereign cloud project in 2009. But it never won widespread adoption, and in 2020 it was shut down as a failure. In 2021 the French government announced a new “trusted cloud” initiative, delivered by Thales and Orange Telecom – but in partnership with Google and Microsoft. Germany’s new “sovereign cloud” is likewise delivered by local firms in partnership with Google and Microsoft. Nominally these national clouds are controlled by the local firms, but in practice they remain part of the technological supply chains of the U.S. hyperscalers.

I think it’s very important to ask why efforts to build and sustain sovereign cloud infrastructure have proven so difficult. My working hypothesis is the following. Like any infrastructure, cloud infrastructure must reach a minimum efficient scale to be viable. But for cloud computing, that efficient scale is extraordinarily high.

One Amazon data centre contains tens of thousands of servers, and the total number of servers that make up Amazon’s global computing platform is measured in the millions. This means that costs such as cybersecurity, administration, and R&D are spread across millions of clients, enabling Amazon to develop and sustain a great variety of advanced services on top of the basic computing hardware, all of which benefit AI research.
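A toy calculation shows why this matters. The fixed-cost figure below is purely hypothetical, assumed for illustration rather than taken from Amazon’s accounts; the point is only that per-client costs collapse as the client base grows.

```python
# Hypothetical illustration of fixed-cost amortisation at hyperscale.
# The $30bn annual figure for security, administration, and R&D is
# an assumption for illustration, not Amazon's actual spend.
fixed_costs = 30e9

for clients in (10_000, 1_000_000, 100_000_000):
    print(f"{clients:>11,} clients -> ${fixed_costs / clients:,.0f} per client")
```

The same fixed outlay that costs each of 10,000 clients millions of dollars costs each of a hundred million clients a few hundred; that is the scale advantage any national cloud would have to match.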

If we build a sovereign cloud in the UK, can we sustain the scale of investment required to keep it up to date? Or even reach that level in the first place? Google alone is able to invest about twice as much money in R&D each year as the entire UK public sector. So it is conceivable that hyperscale cloud computing is a type of infrastructure whose efficient scale is so high that it exceeds what a medium-sized state alone can sustain.

If this is the case, then two different geopolitical paths may be possible:

– One is that the UK collaborates with other countries, such as France and Germany, to reach the critical mass required to sustain a cutting-edge cloud in public ownership.

– Another is that the UK simply doubles down on using the U.S. hyperscalers and attempts to manage that dependence on a political level. After all, the UK is already dependent on the U.S. in various other ways. Even GCHQ and MI5 reportedly store their secrets on Amazon’s platform.

But to ascertain whether these are really the paths available, or whether a sovereign UK-only infrastructure is after all possible, we need to urgently pursue more research on the geography and economics of AI and the cloud computing infrastructures that power it.

-----

Professor Lehdonvirta’s new book “Cloud Empires: How Digital Platforms Are Overtaking the State and How We Can Regain Control” examines how large technology companies are complementing but also challenging and in some cases overtaking state power and sovereignty.
