An AI’s Picture Paints a Thousand Lies: Designating Responsibility for Visual Libel
Journal of Free Speech Law | 3:425 | 2023
30 Pages | Posted: 31 Aug 2023
Date Written: August 3, 2023
Abstract
The visual depictions possible through various generative AI systems have advanced far past the point where a casual observer can determine whether an image is real or synthetic. Synthetic, fake videos have been used to spread disinformation in Russia's war against Ukraine, to produce countless instances of revenge pornography, to create artificial news anchors in South Korea, and to generate fictional social media influencers. These tools have both compelling commercial applications and significant potential for weaponization. Combined with the tendency of some generative AI systems to hallucinate, producing false information that the system presents as accurate, concerns over potentially libelous and harmful visual content will only grow. This article focuses on responsibility and liability for the libelous publication of generative synthetic media. It examines the legal consequences when an AI system itself generates false and harmful images, asking which parties, if any, would be liable for the damage caused by such publication. In providing this framework, the article also identifies steps that parties in the AI content production chain can take to protect individuals from the misuse of these systems.
Note: This is an Accepted Manuscript of an article published in the Journal of Free Speech Law originally available online at: https://www.journaloffreespeechlaw.org/
Keywords: Artificial Intelligence, Generative AI, synthetic media, libel, copyright, defamation, photograph, photorealism, large language models, GANs, generative adversarial networks, First Amendment protection, constitutional rights, Cyberspace Law, Public Law, First Amendment
JEL Classification: K00, K1, K13, K23, K14, L82, L86