ChatGPT and Intermediary Liability: Why Section 230 Does Not and Should Not Protect Generative Algorithms

28 Pages · Posted: 9 May 2023 · Last revised: 17 May 2023

Date Written: May 16, 2023

Abstract

ChatGPT, a generative artificial intelligence technology owned by OpenAI, has been at the center of the technology world this year. Such technologies raise important social, political, and legal questions that have no easy answers. In this paper, I turn to one such legal question: does Section 230—the internet law that provides immunity for users and platforms against claims based on third-party online content—protect OpenAI for ChatGPT outputs? I argue that in most cases it would not. Section 230 was designed to protect intermediaries of third-party content; it does not protect entities that, at least in part, create content themselves. ChatGPT does not function as an intermediary of third-party content but instead creates content, at least in part, by itself. This remains true regardless of whether the output is based on a training dataset or user input. I also explain the strongest counterarguments against this conclusion, based on the prevailing legal tests courts use to analyze Section 230 claims. I demonstrate that these tests are flawed, but even applying the tests as they are, I conclude that OpenAI will not be protected by Section 230. Finally, I argue that it is unnecessary to immunize generative algorithms like ChatGPT at this time.

Keywords: Generative AI, Section 230, ChatGPT, Generative Algorithms

Suggested Citation

Ariyaratne, Hasala, ChatGPT and Intermediary Liability: Why Section 230 Does Not and Should Not Protect Generative Algorithms (May 16, 2023). Available at SSRN: https://ssrn.com/abstract=4422583 or http://dx.doi.org/10.2139/ssrn.4422583

Hasala Ariyaratne (Contact Author)

Georgetown University Law Center

Paper statistics: 490 downloads · 1,354 abstract views · rank 112,888