A Real Account of Deep Fakes

79 Pages Posted: 16 May 2024

Benjamin Sobel

Cornell University - Cornell Tech NYC

Date Written: May 15, 2024


Laws regulating deepfakes are often characterized as protecting privacy or preventing defamation. But examining anti-deepfakes statutes reveals that their breadth exceeds what modern privacy or defamation doctrines can justify: the typical law proscribes material that neither discloses true facts nor asserts false ones. Anti-deepfakes laws encompass harms not cognizable as invasion of privacy or defamation—but not because the laws are overinclusive. Rather, anti-deepfakes laws significantly exceed the dignitary torts’ established boundaries in order to address a distinct wrong: the outrageous use of images per se.

The mechanism by which non-deceptive, pornographic deepfakes cause harm is intuitively simple, yet almost entirely unexamined. Instead, legislators and jurists usually behave as if AI-generated images convey the same information as the photographs and videos they resemble. This approach ignores two blindingly obvious facts: deepfakes are not photographs or video recordings, and often, they don’t even pretend to be. What legal analysis of deepfakes has lacked is a grounding in semiotics, the study of how signs communicate meaning.

Part I of this Article surveys every domestic statute that specifically regulates pornographic deepfakes and distills the characteristics of the typical law. It shows that anti-deepfakes regimes do more than regulate assertions of fact: they ban disparaging uses of images per se, whether or not viewers understand them as fact. Part II uses semiotic theory to explain how deepfakes differ from the media they mimic and why those differences matter legally. Photographs are indexical: they record photons that passed through a lens at a particular moment in time. Deepfakes are iconic: they represent by resemblance. The legal rationales invoked to regulate indexical images cannot justify the regulation of non-deceptive deepfakes. Part III in turn reveals—through a tour of doctrines ranging from trademark dilution to child sexual abuse imagery—that anti-deepfakes laws are not alone in regulating expressive, non-deceptive uses of icons per se. Finally, in Part IV, the Article explains why a proper semiotic understanding of AI-generated pornography is vital. Lawmakers are racing to address an oncoming deluge of photorealistic, AI-generated porn. We can confront this deluge by doubling down on untenable rationales that equate iconic images with indexical images. Or we can acknowledge that deepfakes are icons, not indices, and address them with the bodies of law that regulate them as such: obscenity and an extended version of the tort of appropriation.

Keywords: deepfake, privacy, ai, artificial intelligence, pornography, sexual privacy, semiotics, peirce, speech, First Amendment, expression

JEL Classification: K24, O34, K38, K14, K15

Suggested Citation

Sobel, Benjamin, A Real Account of Deep Fakes (May 15, 2024). Cornell Legal Studies Research Paper Forthcoming, Available at SSRN: https://ssrn.com/abstract=4829598 or http://dx.doi.org/10.2139/ssrn.4829598

Benjamin Sobel (Contact Author)

Cornell University - Cornell Tech NYC ( email )

2 West Loop Rd.
New York, NY 10044
United States
