
Introducing the Pervert’s Dilemma: A Contribution to the Critique of Deepfake Pornography

Published on 22 Nov 2019. Written by Mark Malbas.

OII DPhil candidate Carl Öhman answers questions about his new paper Introducing the Pervert’s Dilemma: A Contribution to the Critique of Deepfake Pornography, which looks at the ethical challenges around pornographic video doctoring.

So, for the uninitiated, what is Deepfake pornography?

Carl: Deepfakes broadly refer to hyper-realistic videos in which a person’s face has been analysed by a Deep Learning algorithm and then superimposed onto the face of an actor in another video. In a way it works almost like a human brain: the Deep Learning algorithm “learns” from the informational input it is fed and is then able to generate its own amalgamation of it. One could say that it is an artificial, or at least an augmented, fantasy. Although this technology was previously only accessible to Hollywood CGI experts, Deepfakes are now also commonly used to create pornographic videos starring female celebrities such as Gal Gadot and Emma Watson. This is a development we need to take very seriously.
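For readers curious about the mechanics, below is a minimal sketch of the kind of architecture reportedly behind early Deepfake tools: a shared encoder paired with one decoder per identity, trained as autoencoders and swapped at inference time. This is an illustration in PyTorch, not the code of any particular tool; the class names, image resolution, and latent dimension are all assumptions made for the example.

```python
# Illustrative sketch of a face-swap autoencoder (assumed architecture,
# not any specific tool's code). One shared encoder learns generic facial
# structure; a separate decoder per identity learns to reconstruct that
# person's appearance. Swapping decoders at inference produces the swap.
import torch
import torch.nn as nn

IMG = 64  # assumed working resolution: 64x64 RGB crops of aligned faces

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A's face
decoder_b = Decoder()  # learns to reconstruct person B's face

# Training (omitted) would minimise reconstruction loss for each pair:
#   loss_a = mse(decoder_a(encoder(faces_a)), faces_a)
#   loss_b = mse(decoder_b(encoder(faces_b)), faces_b)

# The "swap": encode a frame of person A, decode with B's decoder,
# yielding B's likeness in A's pose and expression.
frame_of_a = torch.rand(1, 3, IMG, IMG)  # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))
```

The key design point is the shared encoder: because both identities pass through the same latent space, pose and expression transfer across decoders, which is what makes the publicly available photos of a target sufficient training material.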

How widespread is it?

Carl: There are not yet any reliable statistics on how widespread the phenomenon is. It first emerged outside the professional arena in 2017 and exploded in sophistication and popularity in early 2018. Although sites such as Pornhub, Reddit, and Twitter have banned Deepfake porn, the bans have in turn led to the launch of a number of new sites specifically devoted to sharing and creating Deepfakes, and to teaching users how to make their own. Moreover, the launch of programs like FaceApp has made it possible for amateurs and enthusiasts to create their own Deepfake videos using nothing more than the app and some video material of the person whose face they want to use.

What ethical challenges arise from the Deepfake phenomenon?

Carl: Most people intuitively feel that Deepfake porn is unethical. But this intuition is insufficient as a basis for regulatory action. If we take the problem seriously, we must conduct careful ethical analysis to pin down exactly in what sense Deepfakes are harmful, and make sure that our arguments are logically sound. This is sometimes easier said than done.

In my most recent study I point to a particularly urgent area of concern. Although Deepfakes can be condemned on the basis that they may ruin a person’s reputation, this only captures part of the problem. It inevitably leads to arguments implying that Deepfakes are morally permissible as long as they are not shared. A private Deepfake app that can only be used by one individual (perhaps secured with fingerprint technology) would thus be deemed morally neutral. But our intuition tells us this is wrong: the shared content is only part of the issue.

The problem is that it seems very difficult to find a justification for this intuition which does not simultaneously condemn actions we don’t normally find morally impermissible. Take sexual fantasies as an example. Both fantasies and Deepfakes are arguably no more than a virtual image generated from informational input that is publicly available. There is often no difference in terms of the labour required to produce them, and the “materiality” of the Deepfake doesn’t seem to carry any ethical significance in and of itself. Herein lies the dilemma: the sharing aspect aside, there appears to be no obvious morally significant difference between Deepfakes and fantasies, yet we feel intuitively uncomfortable with the former.

Can this dilemma be solved?

Carl: Luckily, yes, I believe so. There may well be a number of logically coherent solutions to the dilemma, and I would be very curious to see alternatives to the one I’ve come up with. I propose that the dilemma can be solved using the method of Levels of Abstraction, commonly employed in the philosophy of information. I won’t go into the details here, but in short I suggest that actions can be harmful on multiple levels. Hate crimes are a good example: a hate crime harms both the individual victim and the collective identity of which the victim is a member. In my study, I argue that some actions are harmful on only one of these levels. Deepfakes, for example, may not cause any direct harm to the depicted individual, but may nevertheless pose a threat to the larger collective identity of which they are part. This implies that the moral significance of Deepfake porn stems from the social level, and not just from the individuals involved. As one of my colleagues put it, “you cannot take gender out of pornography”, and you cannot take society out of gender. Sexual fantasies, in contrast, appear not to be a gendered phenomenon. Even if their content may sometimes be impermissible, they are not harmful as a phenomenon. This short summary naturally leaves several questions unanswered, so I refer sceptical or interested readers to the full article, which is much more technical.

So, are deepfakes disproportionately problematic for women or men?

Carl: Undoubtedly, yes: for women. Since the software works just as well with input data from platforms such as Instagram and YouTube, it is reportedly also used to create content based on the faces of ex-girlfriends and mere acquaintances. While this theoretically makes anyone a potential target of Deepfake pornography, the phenomenon so far appears to be heavily gendered. Like most pornographic content, it is predominantly produced by and for a male audience, although this time (fictionally) starring women who have not given their consent. This is why the ethical significance of the phenomenon stems from the collective level.

What can be done legally to prevent people from creating such videos, and what are the next steps for research?

Carl: One approach has been to tackle the problem with copyright law, but there are some difficulties. Since the algorithm “learns” the features of a face from different angles, and how it moves across different expressions, creating a Deepfake doesn’t necessitate any privacy infringement or illicit access to information. It can be done with publicly available pictures or video material, which become redundant once the Deepfake has been created. Moreover, it is hard to define the limits of one’s right to one’s own image or likeness. How similar does an image have to be for you to lay claim to it? These are inevitably ethical and philosophical questions. Given how fast the technology is developing, and the need for regulatory responses, the ethical community must start working on such questions immediately. I hope other ethicists will join.

About the author:

Carl Öhman is a doctoral candidate at the Oxford Internet Institute, supervised by Professor Luciano Floridi. He holds a research grant from technology doctor Marcus Wallenberg’s foundation for education in international industrial enterprise. His interests lie at the intersection of economic sociology and ethics. More specifically, Carl’s research looks at the ethical challenges surrounding the commercial management of ‘digital human remains’, the data left by deceased users on the Internet.

Carl graduated from Oxford in 2016 with the award-winning thesis The Political Economy of Death in the Age of Information: A Critical Approach to the Digital Afterlife Industry. Prior to the OII, he studied for a B.A. in Sociology and Comparative Literature at Uppsala University, Sweden, where he explored autobiographical strategies among social media users and the political economy of Twitter hashtags.
