Ocularcentrism and Deepfakes: Should Seeing Be Believing?
28 Pages
Posted: 30 Nov 2020
Date Written: October 22, 2020
Since its unholy beginnings in pornography, deepfake technology has, understandably, been the subject of widespread criticism and outrage. At an individual level, pornographic and other harmful kinds of deepfakes can cause significant psychological and reputational harm. At the collective level, the dissemination of deepfakes affects our ability to differentiate authentic from inauthentic content, rendering us more vulnerable to misinformation. This effect, however, is not limited to deepfakes; photographs and videos have long been vulnerable to manipulation. The problem, then, is not deepfakes per se, but our uncritical and disproportionate reliance on technology-mediated sight, and our insistence that seeing is believing. The purpose of this Article is firstly to understand the historical persistence of ocularcentrism (and our vulnerability to misinformation), and secondly, to situate deepfakes within this social history – do deepfakes represent the limit of our tolerance for visual manipulation, and if so, why? Do they truly threaten visual truth in a way that earlier technologies have not? And does this cause us to abandon ocularcentrism, or to cling desperately to the credibility of visual evidence?
To date, existing scholarship on deepfakes has failed to differentiate between, and tailor solutions for, the individual and collective harms associated with their dissemination. Such tailoring is needed to preserve the substantial utility deepfakes offer. Deepfake audio, trained on recordings of 831 speeches John F. Kennedy delivered in his lifetime, has recreated the speech he was meant to deliver shortly before his assassination; the same voice-synthesis techniques offer hope to patients who have lost their voices to illness. Researchers have used deepfake technology to create animated, photorealistic avatars of deceased persons and portrait subjects. Museum visitors can interact with life-size deepfakes of long-dead artists, constructed from archival footage. Deepfake technology can also be used to anonymize vulnerable sources, generate multilingual voice petitions, produce synthetic MRI images that protect patient privacy, synthesize news reports, improve video-game graphics, reverse the aging process on screen, and elevate fanfiction. If, like most forms of technology, deepfakes are capable of both beneficial and harmful use, how should the technology be regulated to maximize its utility and minimize its harm?
This Article will explore this question through the lens of copyright law and policy. The creation of deepfakes depends heavily on access to, and manipulation of, audiovisual content, much of which is protected by copyright law. Accordingly, copyright represents a natural lens through which to evaluate the unique social issues raised by the creation and dissemination of deepfakes. Part II will explain the technical process by which deepfakes are created, and evaluate whether a deepfake video would constitute transformative fair use. Part III will discuss both the individual and collective harms generated by the dissemination of deepfakes, including the erosion of our ability to differentiate authentic from inauthentic content. It will interrogate the historical basis for the normative claim that seeing is believing, and problematize the role of ocularcentrism in promoting both surveillance and misinformation. Part IV will evaluate the variety of legal and regulatory measures that have been proposed to address the harms caused by deepfakes. Part V will conclude with final thoughts.
Keywords: intellectual property, copyright, deepfake, misinformation, ocularcentrism
JEL Classification: K10, K11, K39