Deepfake Privacy: Attitudes and Regulation
50 Pages · Posted: 11 Feb 2021 · Last revised: 24 Feb 2021
Date Written: February 8, 2021
Using only a series of images of a person’s face and publicly available software, it is now possible to insert the person into a video and show them saying or doing almost anything. This deepfake technology has enabled an explosion of political satire and, especially, fake pornography. Several states have already passed laws regulating deepfakes, and more are poised to do so. This Article presents a novel empirical study that assesses public attitudes toward this new technology. Our representative sample viewed nonconsensually created pornographic deepfake videos as extremely harmful and overwhelmingly wanted to criminalize them. Labeling pornographic deepfakes as fictional did not mitigate the videos’ perceived wrongfulness. In contrast, nonpornographic deepfakes were viewed as substantially less wrongful when they were labeled as fictional or did not depict inherently defamatory conduct such as illegal drug use. Based on the types of harms perceived in this study, we argue that prohibitions on pornographic deepfake videos should receive the same treatment under the First Amendment as prohibitions on nonconsensual pornography, rather than being analyzed under the less-protective law of defamation. In contrast, nonpornographic deepfakes can likely be addressed only via the law of defamation, though there may be reason to allow enhanced penalties or other regulation given the greater harm participants perceived from a defamatory deepfake than from a defamatory written story.
Keywords: privacy, law and psychology, empirical legal studies, deepfakes, morphed videos, nonconsensual pornography
JEL Classification: K10, K30