
AI is not an agent – AI is a tool



Published on 6 Aug 2025
Written by Anoush Margaryan
Professor Anoush Margaryan, Visiting Research Fellow at the OII, considers how we should frame AI: competitor, collaborator, or tool?


In a recent interview on The Daily Show, Yuval Harari claimed: ‘The most important thing to understand about AI is that AI is not a tool, it is an agent. It is the first technology in history that can make decisions and invent new ideas by itself… We have created something that can potentially take power away from us.’

This statement reflects an erroneous but increasingly dominant way of thinking about AI.

Broadly, three competing framings of AI have emerged in public and academic discourse: AI as a Competitor, AI as a Collaborator, and AI as a Tool. Each suggests a different view of agency, and each holds significant implications for how we design, govern, and live with AI.

AI as competitor: The displacement thesis

This framing positions AI as a rival, even a threat, to human capabilities. It underpins familiar narratives about technological unemployment, the obsolescence of human creativity, and fears of superintelligence. Harari’s remark belongs in this category. Associated with longtermism and transhumanism – ideologies with strong tech-sector backing – this perspective assumes a future of machine superiority displacing human roles, values, and achievements. Mark O’Connell, in his ethnographic study of transhumanist communities, notes that this perspective is rooted in a mechanistic view of life: humans are seen as imperfect machines who should use AI to augment their bodies and minds so as to become more efficient. He describes how this view reduces humans to “machines acting within the determinist logic of larger machines, biological components of vast and complex systems”. In this framing, agency is stripped from the individual and transferred to machines; or more accurately, as Thomas Fuchs reminds us, to those who design and control the machines.

AI as collaborator: The benevolent equal thesis

A more optimistic, but equally contested framing positions AI as a partner or collaborator. In contrast to the first perspective, the human remains ‘in-the-loop’ rather than being displaced. Concepts such as “human-AI collaboration”, “hybrid intelligence”, “mutual human-machine augmentation”, or even “human-AI cross-species companionship” dominate this framing, suggesting alignment and synergy between human and machine.

This framing is premised on “conjoined agency”, where humans and machines are seen as both having agency – a deeply problematic view. It anthropomorphises machines and equates algorithmic functionality with agency. While this framing attempts to counter AI supremacy narratives by highlighting AI’s potential to enhance human capabilities rather than replace them, it erroneously ascribes ontological equivalence to humans and machines. Critics, such as Kathleen Richardson, describe this as a form of “synthetical anti-humanism”, warning that the concept of human-machine “collaboration” dehumanises by blurring critical distinctions between persons and artefacts. Furthermore, critics of this view have pointed out that the notion of genuine collaboration collapses under scrutiny: “collaboration requires agents of similar status: capable of comparable forms of intentional action, moral agency, or moral responsibility”, which current AI systems are not.

The idea of human-machine collaboration is not new. In his seminal 1960 paper ‘Man-Computer Symbiosis’, J.C.R. Licklider envisaged that “mechanically-extended man would set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations, whereas computers would do all the routine work under ultimate human control”. Yet even then Licklider admitted that humans might remain “in the loop” more to serve machines than be served by them. Contemporary visions of AI trainers, explainers, and sustainers similarly recast humans as mere cogs in AI systems. On closer examination, this framing risks eroding the centrality of the person by feeding into the posthumanist agenda of “de-centring the human”. Yet as Jens Zimmermann pointed out: “To preserve and cultivate the centrality of the person is the only way to ensure a humane, healthy social order.”

AI as tool: The human-at-the-helm thesis

The third framing sees AI neither as a competitor nor a collaborator, but as an advanced tool that should remain firmly under human control and subject to human purposes. This perspective places the human decisively at the centre, as the agentic subject using AI tools to act on an object to reach an outcome driven by the person’s own goals and purposes. The AI as a Tool framing is grounded in a humanist understanding of agency: to be an agent is to act with forethought, intentionality, self-reactiveness and self-reflexivity, agentic features that machines are currently not capable of.

As philosophers Josiah Ober and John Tasioulas recently argued, AI systems should be developed as intelligent tools that enhance human flourishing, not as proxies for human agency. Similarly, in 1933, the Russian philosopher Nikolai Berdyaev warned us: “Technology has always been a means, a tool, and not a goal in and of itself. There can be no technological goals in life. There can be only technological means, while the aims of life must lie in some other arena, in the field of the spirit.”

The AI as a Tool framing elevates the human above the machine, placing great significance in the individual’s power to question and govern technology through a human-centred framework grounded in human agency, human values, and human capabilities. In doing so, it rejects the techno-centrism and techno-solutionism inherent in the first two framings.

Why framing matters

The way we talk about AI influences how we imagine the future, as well as how we design and deploy it, how we craft policies to regulate it, and how we educate citizens to function productively and happily in a world permeated by AI. Simon Lindgren observed that “AI is, like other media, a bearer of ideologies, which means that AI plays a significant role in promoting and shaping ideas about what is considered true, important, and prioritized in society. Ideologies shape AI, and AI then shapes our ideologies.”

But if we accept that AI systems are tools, then we must be vigilant not to become too dependent on them. As AI becomes embedded in all spheres of our lives, we risk becoming dependent not just on the tools themselves, but on the logic they encode – logic that may diminish our autonomy, agency, and privacy, exploit our attention, and affect human relationships.

Moving forward, the challenge is not simply to regulate AI, but to reclaim the centrality of the human. As Berdyaev said nearly 100 years ago: “A machine can be a great tool in our hands, in our victory over the power of elemental nature, but for this a person must be a spiritual being, a free spirit.”

Written by Professor Anoush Margaryan, Department of Digitalisation, Copenhagen Business School and Visiting Research Fellow, Oxford Internet Institute.

Disclosure: Funding source: Carlsberg Foundation Monograph Grant CF23-1054 ‘Human Capabilities in the Age of AI’

