In Conversation with Professor Mariarosaria Taddeo, Oxford Internet Institute
The last decade has seen a global race for the development and use of AI in defence. Machine learning algorithms and predictive analysis are increasingly used to manage logistics, analyse vast amounts of data, and support decision making in the use of force.
However, the use of AI for national defence poses important ethical concerns. These challenges merge longstanding ethical dilemmas surrounding the use of force in warfare with new risks posed by AI, such as reduced human control, diluted responsibility, devaluation of human skills, and erosion of self-determination.
In her new book, The Ethics of Artificial Intelligence in Defence (OUP), Professor Mariarosaria Taddeo provides a comprehensive overview of these issues. She explores how AI is employed in intelligence analysis, cyber warfare, and autonomous weapon systems. Taddeo also builds an ethical framework grounded in AI ethics and Just War Theory to answer two questions: how can AI in defence be used for good, and what is needed to develop an ethical governance of AI in defence?
We caught up with Mariarosaria to discuss some of the issues raised in her book.
David: Does AI present fundamentally new questions when it comes to defence, i.e. is there a gap in our norms and understanding? Or can we easily apply the international norms that have already been developed to regulate warfare to the next generation of technologies? What changes when we introduce AI?
Mariarosaria: It’s a mix of both. On the one hand, the ethical principles underpinning international humanitarian law are still valid when we think about an AI-driven defence. On the other hand, their application is problematic. Consider, for example, the attribution of responsibility for war crimes. This is a crucial element in maintaining the morality of war. The Nuremberg trials remind us that “Crimes against international law are committed by men, not by abstract entities, and only by punishing individuals who commit such crimes can the provisions of international law be enforced” (International Military Tribunal, Nuremberg, 1947, 221).
When considering actions performed by AI systems, whether autonomous weapon systems or simply decision-support systems, attributing this responsibility is problematic. This is in part due to the rather distributed way in which AI systems are developed and used, which makes it hard to reverse engineer the chain of decisions and actions that led to undesirable outcomes. But mostly, it has to do with the limited predictability of AI systems, which can learn and perform behaviours that neither the designer nor the user ever intended. When there is no intentionality, responsibility, at least moral responsibility, cannot be attributed.
We need to find a new way to attribute responsibility for the actions performed by AI systems in defence and to make sure that this way is justified and fair. To do so we need new ethical thinking.
David: What sorts of autonomous weapons systems are already being used, or anticipated? And where might we see AI increasingly being used in warfare?
Mariarosaria: The first reported use of fully autonomous weapon systems dates back to 2021 in Libya. The real watershed, however, has been the war in Ukraine, where both sides have used autonomous weapon systems, and developed and tested new ones.
Russia has used weapons like the KUB-BLA (also known as the KYB-UAV) drone, a loitering munition developed by Kalashnikov and Zala Aero Group. In Ukraine, we have seen the use both of repurposed commercial drones and of autonomous weapon systems developed and tested locally, such as those produced by Vyriy.
At the moment, a human operator is still, somehow, in the loop, but it remains to be seen to what extent they have control over the decisions that drones make. The trend is to develop technology that increasingly sidelines human judgment in decisions about targeting and firing. The widespread accessibility of ready-made devices, user-friendly software, advanced automation algorithms, and specialised AI microchips has propelled a dangerous innovation race, pushing us into uncharted territory and ushering us into a new era of autonomous weapons.
David: I guess what we already know about the everyday use of AI (errors, bias, lack of norms, poor regulation, lack of an overall plan vis-à-vis its role in society) doesn’t inspire much confidence that attaching AI capabilities to weapons systems will be a straightforward process. What is motivating governments and militaries to move into this space? What are the perceived benefits of these systems?
Mariarosaria: From a defence point of view, these weapons can bring great tactical advantage and keep soldiers safe, as the machines engage directly with the enemy. Also, defence (and war particularly) is a competitive game: one needs to match at least the capabilities of one’s opponent to stand a chance of winning.
AI is now considered a key capability. Errors, bias, lack of control, and a governance vacuum have not prevented us from deploying AI in other high-risk domains of our societies, such as healthcare and justice. Defence will be no exception.
The questions here are all normative: What level of error, bias, and lack of control are we, as liberal democratic societies, willing to accept? What types of decisions do we think it is appropriate to delegate to AI systems in war? The way we run our defence, and wage war, is a reflection of the seriousness with which we take our values and rights. An AI-driven defence is unavoidable, and I believe it would be reckless to discard AI capabilities in defence, especially in the current international scenario.
The question is how we shape the use of this technology in a way that does not conflict with or violate those values and rights. This is imperative. The alternative would be for liberal democracies to defeat themselves, as they would defend themselves by violating the very values and rights that set them apart from their opponents.
David: Given hybrid warfare, disinformation, false flag operations — that is, the extreme messiness of the information environment within which wars are waged — it will presumably be increasingly difficult to know “who pressed the button”, if there are layers of automated, possibly very opaque (and secret, possibly proprietary), decision making systems involved. What are the main ethical issues of relying on AI systems, with humans perhaps increasingly detached from the decision-making process?
Mariarosaria: The key ones concern the attribution of moral responsibility, which we already discussed. The germane issue here is that of control. This is a very complex topic. It is very hard to define control, and to work out the best way of framing control of AI, especially as we move toward hybrid teams of human and artificial agents. More work is needed on this concept.
But going back to your question, to me the point is not so much ‘who presses the button’ as ‘can we unplug the system in a timely and effective way when we realise that it is making a mistake?’ Do we have sufficiently trained personnel to realise that the system is making a mistake? Are they allowed sufficient time and autonomy to intervene?
David: You use “Just War Theory” as a framing throughout your book — what is that?
Mariarosaria: Just War Theory is an ethical theory for determining justified and just (permissible) behaviour in war. It goes back to Cicero and Thomas Aquinas. It addresses the conduct of state actors about to enter war (jus ad bellum), in war (jus in bello), and after war (jus post bellum). It provides the fundamental principles that underpin international humanitarian law, for example the principle of distinction between combatants and non-combatants and the principle of proportionality of responses in war. This theory is the map on which the entire body of international humanitarian law, the laws of conflict, and the Geneva Conventions have been charted.
David: Your book considers the use of AI in defence, for example used as detection, analysis and decision-making agents to support conventional kinetic weapons like missiles. I guess AI could also be used as an attack agent – to cripple or take over an adversary’s systems, or indeed domestic infrastructure. Is this fundamentally different from its use in defence, or are we always dealing with a continuum of possible actions?
Mariarosaria: I take defence to encompass the entire spectrum of decisions, processes, and operations that underpin the effort of a state actor to defend itself. This includes attacks. It is a continuum of actions in this sense. Whether an operation is defence, deterrence, or attack, it must adhere to the principles of distinction and proportionality.
David: It isn’t always clear that adversaries will necessarily be foreign governments — many governments will view their own populations as a threat. Is there a blurring of military and civilian uses (and industries) in the military-AI space, or are they conceptually very distinct? For example, when we consider how these technologies might be used against non-military, even civilian, populations — protesters, migrants, minorities, separatists, etc.
Mariarosaria: Defence implies state vs state activities, or state vs terrorist activities. Other uses, like the ones you mentioned, do not belong to the area of defence per se, so ethical analyses need to be developed on grounds other than Just War Theory, for example the protection of human rights.
There is one aspect to note about the blurring of civilian and military categories, and of the categories of combatants and non-combatants. Several AI companies, such as OpenAI, Amazon, and Google, have become defence contractors. This poses questions as to whether they could be legitimate targets for our opponents. In the same way, in Ukraine several services and platforms have been provided to non-combatants to report information about movements of the Russian army or to report war crimes. This also poses questions as to whether citizens who provide this information lose their non-combatant status and thus can be targeted by the Russians.
David: It’s easy to become extremely depressed about the rapid roll-out of AI outpacing our ability to think rationally and deeply about the possible harms to society. What positives are there in the AI-defence space? What might we hope for? And what would we need for these developments to be seen as controlled, reasonable and stabilising, rather than deeply worrying?
Mariarosaria: Indeed. We shouldn’t buy into the illusion that AI is a magic wand to make our defence systems invincible—it’s not. AI is a fragile technology, and if mishandled, the repercussions could be catastrophic. Instead of rushing in, we need time to grasp its strengths and flaws, engage in public debates about what’s morally acceptable, and establish robust governance for its role in defence.
In the EU, it took nearly a decade to regulate AI and digital technologies for everyday life. We must apply the same rigour to AI in defence, learning from existing frameworks while committing to the effort it requires. Accelerating the process is possible, but only with serious dedication.
A good starting point? Bring AI-driven defence into the spotlight of public discourse. Let’s not shove defence into the shadows, dismissing it as despicable or unworthy of attention. Defence protects the values and interests of our societies. It’s vital for citizens to scrutinise and guide defence organisations to ensure they uphold the principles that define us.
***
Professor Mariarosaria Taddeo was talking to the OII’s Scientific Writer, David Sutcliffe.
Mariarosaria Taddeo is Professor of Digital Ethics and Defence Technologies at the OII. Her recent work focuses on the ethics and governance of digital technologies, and ranges from designing governance measures to leverage AI, to the ethical challenges of using these technologies in defence, the ethics of cybersecurity, and the governance of cyber conflicts. Find out more about her research.
Read the book: Mariarosaria Taddeo (2024) The Ethics of Artificial Intelligence in Defence. OUP.