Asking “Whodunnit?” in an Age of Artificial Intelligence

Published on
25 Feb 2019
Written by
Luciano Floridi

Traditionally, crime and punishment involved a human perpetrator and victim(s). Establishing the guilt and identity of the perpetrator has never been straightforward, but what happens when Artificial Intelligence is used to commit or facilitate a crime? At what point does the human actor become delinked from the criminal act itself?

We recently undertook the first analysis of its kind into the threats posed by Artificial Intelligence Crime (AIC). A full and systematic review of existing literature in this area showed that:

  • AIC could undermine existing liability models, threatening the dissuasive and redressing power of the law
  • Artificial agents may be able to learn to generate signals to financial markets that mislead other actors in the market
  • Drones and underwater vehicles, such as submarines, can be deployed by criminals to smuggle drugs if their hardware and software are programmed to ensure there is no link back to the human deployer
  • ‘Buddy bots’ could be deployed on instant messaging programmes to offer drugs and pornography to children
  • Social bots can be deployed to spread hate messages on social media about an individual, again with limited ability to trace them back to their origins. They could also increasingly be used to steal people’s identities online
  • Artificial agents could make torture easier to perpetrate, since their lack of compassion and emotion, and their physical detachment from the victim, remove a sense of culpability
  • Interaction with ‘sexbots’ could desensitise people towards the perpetration of sexual offences.

So what do we do with these insights? Certainly, existing laws do not include a specific category of responsibility for crimes perpetrated by robots or other artificial agents. While we can laugh at the prospect of a rogue R2-D2 gone bad standing trial, the idea of an online bot being held to account is even more far-fetched. So the burden of legal responsibility must remain with the humans who design, develop and deploy the technology, whether they are engineers, customers, or users. There are, of course, questions about precisely where this liability begins and ends, and about how regulation can be designed so as not to stifle innovation in this area. We have previously worked on this topic of “distributed responsibility” in another context.

All this points to the need to do a lot more work with policymakers, lawmakers, ethicists, developers, and civil society groups to understand how we can update the law to account for the AI age. We welcome opportunities to support this conversation.
