Without anticipatory governance, quantum technologies risk following the same ethically fraught path that AI is currently on. A new Comment piece by Professor Mariarosaria Taddeo, published in the leading journal Nature, proposes six principles for the responsible design and development of quantum technologies for defence.
Quantum technologies are being adopted by national defence departments all over the world for their ability to sharpen detection, streamline communication systems, and improve navigation, among other benefits.
However, like other emerging technologies, including AI, they pose ethical risks, according to Professor Mariarosaria Taddeo, Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute, who has published an agenda-setting Comment piece on the topic in Vol. 634 of Nature, published 24 October 2024.
Quantum technologies could be used to design new molecules for chemical or biological weapons, break the cryptographic measures that keep online communications secure, and breach privacy and freedom of communication.
“That is why it is crucial to develop ethical governance that is focused specifically on quantum technologies, including our suggestions of categorizing risks according to knowns and unknowns, building in multilateral collaboration and oversight, and collaborating with civil society for its benefit,” says Professor Mariarosaria Taddeo.
The comment piece draws parallels between lessons learnt from AI ethics and quantum technologies, specifically the ‘neutrality thesis’: the assumption that technology itself is ethically neutral and therefore does not require governance until ethical implications emerge after it has been deployed.
For example, design and development decisions dictate whether AI models are subject to bias. Because AI governance has lagged behind the pace of development of those technologies, a host of issues have emerged that policymakers are only just beginning to grapple with, from who has access to code to how much energy AI systems consume.
“As scholars and policymakers have become more aware of the real risks that AI systems pose, most now agree that ethical analyses ought to inform the entire AI lifecycle,” Taddeo adds. “So it is with quantum technologies: governance should accompany each moment of the innovation life cycle, with measures that are designed to support it and that are proportionate to the risks each moment poses.”
The comment piece recommends an independent oversight body for quantum technologies in defence to help mitigate these ethical risks collaboratively, drawing on the perspectives of relevant parties and experts, including physicists, engineers, national defence and security practitioners, and specialists in international humanitarian law, human rights, the ethics of technology and war, and risk assessment.
While the authors acknowledge the demands of time, funding, and human resources that an “anticipatory ethical governance” approach requires, Taddeo concludes: “ignoring the need for ethical governance now to sidestep these costs is a path to failure — addressing harms, correcting mistakes and reclaiming missed opportunities later on will be much more costly.”
Read the full article.
About the commentary: The article, ‘Consider the ethical impacts of quantum technologies in defence now’ by Mariarosaria Taddeo, Alexander Blanchard and Kate Pundyk, is available to download in Vol. 634 of Nature, published 24 October 2024. The research is part of a larger project on the Ethics of AI for National Defence funded by the Defence Science and Technology Laboratory (Dstl) of the UK Ministry of Defence.
About the authors: Lead author Mariarosaria Taddeo is Professor of Digital Ethics and Defence Technologies at the Oxford Internet Institute, University of Oxford, UK; a Defence Science and Technology Laboratory (Dstl) Ethics Fellow at the Alan Turing Institute, London, UK; and a member of the AI Ethics Advisory Panel to the UK Ministry of Defence. Nature co-author Alexander Blanchard is a senior researcher at the Stockholm International Peace Research Institute, Sweden. Kate Pundyk is a research assistant at the Oxford Internet Institute, University of Oxford, UK.