Jason I. Kim, Visiting Policy Fellow, Oxford Internet Institute, considers how digital ecosystems learn to defend at the speed of change.
The high-stakes battle for AI supremacy is accelerating faster than anyone predicted, driven by the rapid pace of releases from major industry leaders such as OpenAI, Anthropic, Google, Meta, xAI, and Alibaba. This explosion of access is fuelling a vast, decentralized ecosystem, with the total number of specialized AI models projected to reach 2.5 million by the end of 2025. That pace is testing digital platforms’ commitments to user trust and platform safety.
While LLMs provide remarkable capabilities, the speed of this “LLM Arms Race” is outstripping the capacity of formal governance and policy to keep up. The result is a flood of new security risks and an erosion of social trust, driving a scramble among digital platforms and developers to mitigate those risks.
The Low-Cost Pathway to High-Level Cybercrime
Gone are the days when grammatical errors and awkward phrasing were tell-tale signs of a phishing attempt. LLMs can generate ‘perfect scams’ that are grammatically flawless, contextually relevant, and personalized to an astonishing degree. The speed and sheer volume of this output fundamentally erode online trust, making it almost impossible for users, or even trust and safety systems, to distinguish legitimate communication from AI-driven attacks.
The most immediate danger is the rapid democratization of powerful offensive tools that bad actors can customize. The speed of the LLM race means that capabilities which once required sophisticated, costly human expertise are now accessible through a simple prompt. A recent IBM study illustrates this danger, detailing the sudden drop in the barrier to entry for complex attacks such as multistage phishing campaigns, personalized business email compromise schemes, and the rapid generation of undetectable malware code.
Adversaries are also building their own models. The emergence of specialized, “WormGPT-equivalent” models, explicitly designed for harmful purposes such as writing sophisticated malware, generating phishing kits, and automating account takeover attempts, has fundamentally changed the cyber threat landscape. This is the low-cost pathway to high-level cybercrime, enabling even novice threat actors to execute complex, multi-stage attacks.
The Turbocharged Threat of Fluent Disinformation
The LLM Arms Race is escalating the challenge of fluent disinformation from a nuisance to a systemic threat to online discourse. The frightening innovation is that the threat is no longer passive. As a recent study shows, the fusion of LLMs with multi-agent “swarm” architectures allows systems to coordinate autonomously, infiltrate online communities, and fabricate a widespread synthetic consensus cheaply and at a massive scale.
Disinformation has evolved from mere high-volume content creation to coordinated dissemination and interaction at scale. These AI agents can operate extensive networks of social media profiles, adjusting their messages in real time based on feedback and interactions. They can hold individualized conversations with users, convincingly imitating human behaviour to gain trust and strategically spread misinformation.
This volume and fluency overwhelm traditional fact-checking mechanisms and media ecosystems, making it more challenging than ever for citizens to discern verifiable information from sophisticated fabrication. The information environment is at risk of being drowned in machine-generated consensus.
Ecosystem Response: How Digital Life is Innovating Against Speed
While the risks are severe, the sheer velocity of the LLM Arms Race is paradoxically driving innovation across digital ecosystems. The need to maintain user trust and platform safety is carving out space for creative, real-time defences that operate at previously unmatched speed.
For example, the proliferation of AI-generated content has driven rapid innovation in AI detection and provenance tools. News organizations, social media platforms, and security vendors are developing new models to detect AI fingerprints and implementing content provenance standards such as C2PA to verify content origin.
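To make “verifying content origin” concrete, the sketch below shows the basic idea behind provenance checks: a publisher binds a signature to a hash of the content at creation time, and a platform later confirms the content still matches. This is a deliberately simplified illustration, using a shared key and a made-up manifest format rather than the asymmetric signatures and richer manifests that standards such as C2PA actually specify.

    import hashlib
    import hmac

    SHARED_KEY = b"demo-key-not-for-production"  # placeholder key for this sketch only

    def sign_content(content: bytes, creator: str) -> dict:
        # Bind the creator's identity to a hash of the content at publication time.
        digest = hashlib.sha256(content).hexdigest()
        tag = hmac.new(SHARED_KEY, digest.encode(), hashlib.sha256).hexdigest()
        return {"creator": creator, "sha256": digest, "signature": tag}

    def verify_content(content: bytes, manifest: dict) -> bool:
        # Recompute the hash and signature; any edit to the content breaks both.
        digest = hashlib.sha256(content).hexdigest()
        expected = hmac.new(SHARED_KEY, digest.encode(), hashlib.sha256).hexdigest()
        return digest == manifest["sha256"] and hmac.compare_digest(expected, manifest["signature"])

    article = b"Original newsroom copy."
    manifest = sign_content(article, creator="Example News Desk")
    print(verify_content(article, manifest))                      # True: provenance intact
    print(verify_content(b"Tampered or synthetic copy.", manifest))  # False: provenance broken

In practice, the signed manifest travels with the content, so any downstream platform can check whether an image, video, or article still carries an unbroken chain back to its original source.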
Another positive response to the LLM Arms Race is the rapid advancement of open-source LLMs and Small Language Models (SLMs). SLMs further democratize access to foundational technology and fuel innovation through accessibility and specialization. Companies and developers are releasing high-performance models that often match or even surpass proprietary offerings at a much lower cost. This promotes transparency in architecture, reduces reliance on any single company, and enables small businesses and academic researchers to deploy state-of-the-art AI.
The New Paradigm: Resilience Through Decentralized Power
Unlike earlier arms races that centralized power, this LLM-driven competition is democratizing powerful technology and accelerating innovation. This accessibility means highly effective tools are within reach of everyone, from cybercriminals to developers. The main challenge has fundamentally shifted: it is no longer about controlling access to these technologies, but about building secure, resilient platforms that can rapidly recover from attacks. The rise of SLMs reinforces this decentralized power by enabling data sovereignty and on-device edge deployment, mitigating privacy risks for sensitive data. Combating these emerging threats requires policymakers, academics, and industry to come together and champion open-source tools and data-sharing protocols.
About the author
Jason I. Kim is a leading expert in trust and safety, artificial intelligence (AI), and public sector innovation. He currently serves as the Director of Data Science and Analytics for Google, where he oversees a team responsible for building trusted experiences across Google’s broad product portfolio. He has held the position of Visiting Policy Fellow at the OII since April 2024.
Find out more about the OII’s Visiting Policy Fellowship programme.