Where's the Liability in Harmful AI Speech?

47 Pages Posted: 15 Aug 2023

Mark A. Lemley

Stanford Law School

Peter Henderson

Princeton University - Center for Information Technology Policy; Princeton University - Princeton School of Public and International Affairs; Princeton University - Program in Law & Public Policy; Princeton University - Department of Computer Science

Tatsunori Hashimoto

Stanford University

Date Written: August 3, 2023

Abstract

Generative AI, in particular text-based “foundation models” (large models trained on a huge variety of information, including the internet), can generate speech that could be problematic under a wide range of liability regimes. Machine learning practitioners regularly “red-team” models to identify and mitigate such problematic speech: from “hallucinations” falsely accusing people of serious misconduct to recipes for constructing an atomic bomb. A key question is whether these red-teamed behaviors actually present any liability risk for model creators and deployers under U.S. law, incentivizing investments in safety mechanisms. We examine three liability regimes, tying them to common examples of red-teamed model behaviors: defamation, speech integral to criminal conduct, and wrongful death. We find that any Section 230 immunity analysis or downstream liability analysis is intimately wrapped up in the technical details of algorithm design. And there are many roadblocks to actually holding models (and their associated parties) liable for generated speech. We argue that AI should not be categorically immune from liability in these scenarios, and that as courts grapple with the already fine-grained complexities of platform algorithms, the technical details of generative AI pose even thornier questions. Courts and policymakers should think carefully about what technical design incentives they create as they evaluate these issues.

Keywords: Public Law, Cyberspace, Tort Law, First Amendment, AI, Generative AI

Suggested Citation

Lemley, Mark A. and Henderson, Peter and Hashimoto, Tatsunori, Where's the Liability in Harmful AI Speech? (August 3, 2023). Available at SSRN: https://ssrn.com/abstract=4531029 or http://dx.doi.org/10.2139/ssrn.4531029

Mark A. Lemley (Contact Author)

Stanford Law School ( email )

559 Nathan Abbott Way
Stanford, CA 94305-8610
United States

Peter Henderson

Princeton University - Center for Information Technology Policy ( email )

C231A E-Quad
Olden Street
Princeton, NJ 08540
United States

Princeton University - Princeton School of Public and International Affairs ( email )

Princeton University
Princeton, NJ 08544-1021
United States

Princeton University - Program in Law & Public Policy ( email )

Wallace Hall
Princeton, NJ 08544
United States

Princeton University - Department of Computer Science ( email )

35 Olden Street
Princeton, NJ 08540
United States

Tatsunori Hashimoto

Stanford University ( email )

Paper statistics: 757 downloads; 3,096 abstract views; rank 71,886.