As we mark Safer Internet Day, Professor Sandra Wachter, Professor of Technology and Regulation, spoke to us about the online phenomenon known as ‘deepfakes’, how to spot them, and what their growing proliferation means for internet safety.
We’ve heard people talking about ‘deepfakes’ on the internet, but what constitutes a ‘deepfake’ and how common are they?
Sandra: A “deepfake” is synthetic media that recreates a person’s appearance and/or voice using a type of artificial intelligence called deep learning (hence the name, deepfake). Deepfakes are typically fake images, videos, or audio recordings. You may have seen popular videos of celebrities or politicians saying something they are unlikely to say in real life. These are common examples of deepfakes.
Deepfakes rely on artificial neural networks, which are computer systems modelled loosely on the human brain that recognize patterns in data. Developing a deepfake photo or video typically involves humans feeding hundreds or thousands of images into the artificial neural network in the computer, and “training” it to identify and reconstruct patterns—usually faces.
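The “train a network to reconstruct patterns” idea can be shown in miniature. The snippet below is an illustrative toy, not a real deepfake pipeline: random vectors stand in for face photos, and a tiny linear autoencoder (the sizes, learning rate, and variable names are all illustrative choices, not anything from a real system) learns to compress and reconstruct its training data. At vastly larger scale, the same reconstruction principle underpins deepfake generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 toy "photos": random 64-value vectors standing in for face images.
X = rng.random((200, 64))
X = X - X.mean(axis=0)  # centre the data

# A tiny autoencoder: compress 64 values down to 16, then reconstruct.
W_enc = rng.normal(0, 0.1, (64, 16))  # encoder weights
W_dec = rng.normal(0, 0.1, (16, 64))  # decoder weights
lr = 0.5

def mse():
    """Mean squared reconstruction error over the training set."""
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

before = mse()
for _ in range(500):
    H = X @ W_enc            # compress each "photo"
    err = H @ W_dec - X      # how far the reconstruction misses
    # Gradient descent on the reconstruction error ("training")
    g_dec = H.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
after = mse()
```

After training, the reconstruction error drops: the network has learned the recurring patterns in its training data well enough to rebuild it from a compressed code, which is the core trick that deepfake systems exploit with faces.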
The Digital Services Act in the European Union was recently enacted. The Online Safety Bill received Royal Assent last year and is now law, with Ofcom becoming the regulator for online safety. Are deepfakes included as part of these new laws or not? If not, what is the regulatory position concerning deepfakes? Is it illegal for people to create deepfakes or other AI-generated images?
Sandra: It is not illegal per se to create deepfakes. The main thing to consider when using this technology is who holds rights over the input data. If you are using pictures or text, others may hold data rights or copyright over them, and you will have to ask for consent first.
However, if a deepfake starts to infringe other people’s rights, then the aforementioned laws might apply. For example, if you use deepfakes to spread misinformation or for gender-based violence, the rules of the Online Safety Act and the Digital Services Act might be applicable. This can mean that the content is removed and user accounts are suspended or blocked.
How to spot deepfakes and AI-generated content online
Is it possible that someone browsing the internet could spot a deepfake just from the audio, from what they hear online?
Sandra: At the moment it is still relatively simple to spot fake content. But the technology is evolving fast and is getting better at tricking our eyes and ears. Therefore, it is necessary to have legal and technical tools that help us distinguish between fact and fiction. The European Union’s Artificial Intelligence Act, for example, will include a legal requirement that artificially created content is “watermarked”. That means AI-generated images or text will carry a mark indicating they were not created by a human. This will make it easier for us not to fall prey to misinformation campaigns, rumours, scandals or political propaganda.
What about people’s appearances? Could someone surfing the internet spot a deepfake from the subject’s facial expressions in the video? And would the finer details of someone’s appearance such as their hair, hands, fingers, teeth and facial expressions appear distorted?
Sandra: What we’ve been used to seeing with deepfakes is that they have giveaways: the technology has not been great at creating noses and eyes, so people’s faces have not looked quite right. Similarly, colour has been a sign of a deepfake, with images often appearing too vibrant. However, now that this is known, these giveaways are melting away as creators respond to the problems and pay attention to the areas that previously gave clues to content being a deepfake.
Are there any other things that people should look out for when they are trying to work out if an image on the internet is real or actually a deepfake?
Sandra: My best advice here would be for people to be curious about this technology and get to know it. By using the technology and becoming familiar with it, it will become easier to spot when it is being used. Avoiding or fearing the technology will make it harder to understand and identify it.
Impact of deepfakes and AI-generated content on internet safety
With deepfakes and AI-generated content becoming more widespread on the internet, and this expected to continue, particularly in a General Election year, what advice would you give to internet users who are worried about being caught out by fake content?
Sandra: I would say be curious: question what you see, think about the source of what you are viewing, and ask whether it seems plausible or whether something seems not quite right. Again, familiarising yourself with the technology will help you to be an educated internet user.
Watch Professor Wachter share her insights on everything you need to know about deepfake technology.
The way ahead
The Governance of Emerging Technologies group and its lead researchers, Professor Sandra Wachter, Professor Brent Mittelstadt and Professor Chris Russell, have produced a range of seminal papers in the field of AI and ethics, which have significant relevance to the debates raging today about regulating AI and making it more transparent, ethical, and ultimately, more trustworthy.
Find out more about their latest research:
OII | Programme on the Governance of Emerging Technologies (ox.ac.uk)