Johanna Ballesteros
Research Associate
Johanna Ballesteros works at the intersection of tech, policy, and politics, building digital products and innovation ecosystems. She holds an MSc from LSE.
As Europe doubles down on its digital sovereignty agenda, a prior question emerges: what kind of sovereignty are we actually trying to build? By technological sovereignty, we mean the ability to make meaningful choices about the technologies we rely on, from cloud infrastructure and semiconductors to AI, without being locked into dependencies that constrain those choices. This is not a case for technological isolationism, nor does the argument depend on settling every definitional dispute around the term.
What matters more is a distinction that is often overlooked: the difference between present technological sovereignty and future technological sovereignty. Present sovereignty means securing what already exists, while future sovereignty means building what does not yet exist.
This distinction matters because technological change is now moving faster than the adaptive capacity of states. Building resilience into today’s systems is one thing, but developing the capabilities to shape those of tomorrow is another. For example, if Europe had invested more consistently in future sovereignty a decade ago, would we still be debating cloud dependence today?
AWS, Microsoft Azure, and Google Cloud now account for around 65-70% of European cloud infrastructure, according to Synergy Research Group, and that concentration has deepened over the past five years. The European Investment Bank reports that the EU invests roughly 2.2% of GDP in R&D, compared to 3.5% in the United States and 4.9% in South Korea. Meanwhile, European Patent Office data show that Europe's share of global AI patents declined between 2018 and 2023 relative to both the US and China.
In this way, present sovereignty is more like a report card on past choices. The productive question to ask is: what would we have done differently if we had not taken access to key inputs for granted?
Strategic Necessity as Science Policy
One way to make such technological dependencies visible earlier is to assume their absence in science policy design. A virtual export restriction, for example, could operate as a funding condition: teams would have to demonstrate how their systems would continue to function if access to a key external model, chip supply, or source of compute became unavailable. Necessity fosters invention, and invention underpins sovereignty.
The case for this kind of mission-oriented public investment is well established. Mariana Mazzucato’s work, particularly through the UCL Institute for Innovation and Public Purpose, has documented how the most consequential technologies, from GPS to touchscreens to the internet itself, emerged from public investment programmes that set ambitious objectives and tolerated high failure rates. The question is whether Europe can design programmes with that character, rather than defaulting to the risk-averse co-funding models that dominate today.
What if part of Europe's existing R&D spending were redirected into programmes built around structured necessity? Even a modest amount of funding could support projects that operate under explicit constraints, helping to reduce Europe's long-run dependence. SPRIND's Next Frontier AI Initiative is one example of this approach in practice.
Constraints do not automatically produce innovation: in many cases, restrictions simply force substitution with worse alternatives. But when constraint is paired with sustained investment, it can compound into capability. The space race translated geopolitical constraint into decades of research investment and accumulating capacity. More recently, DeepSeek developed a frontier-competitive AI system under tighter compute access, driven by innovations in training efficiency.
This is what matters for science policy design. We should catalyse invention through strategic necessity, while preserving the conditions that allow new capabilities to scale.
From Presentism to Leapfrogging
Innovation systems are evolutionary environments, and evolution depends on selection. Carlota Perez’s work is instructive here: windows of opportunity in new technological paradigms are real but short. Europe has missed several, and the question is whether it can recognise the ones that remain open.
This is why an exclusive focus on reproducing the current technological stack is risky. We might call it presentism: spending resources to catch up on yesterday's frontier. A more ambitious strategy is to leapfrog by placing disciplined bets where trajectories are not yet locked in, such as in scientific AI, quantum technologies, or fusion energy. These are domains in which strategic value is high, while technological trajectories and institutional arrangements remain less settled than in more mature parts of the digital stack.
Europe too often competes where competition is already fiercest, mistaking intensity of rivalry for strategic importance. A better approach might be to choose arenas where strategic value is high and paths remain open. This requires science policy guardrails: clear funding conditions, explicit evaluation criteria, and feedback loops designed to make constraint a source of capability.
Germany’s High-Tech Agenda is already moving in this direction, prioritising AI, quantum, and fusion and backing them with milestone-driven roadmaps. That creates a practical opening for a science policy shaped by strategic necessity.
Necessity is the mother of invention, and quite likely the grandmother of sovereignty. If Europe succeeds in designing rather than merely enduring its constraints, sovereignty becomes the capacity to generate technological paths that would not otherwise exist.
Companion Resources
Interactive economic model & simulator: https://sovereignty-model-production.up.railway.app/
An interactive companion to this blog post. The tool lets you explore the economic model behind the argument: that Europe must shift resources from securing today’s technology toward building tomorrow’s.