AI Outputs Are Not Protected Speech

82 Pages Posted: 16 Feb 2024 Last revised: 21 May 2024

Peter Salib

University of Houston Law Center

Date Written: January 1, 2024


AI safety laws are coming. Researchers, advocates, and the White House agree. Rapidly advancing generative AI technology has immense potential, but it also raises new and serious dangers—deadly bioterrorism, crippling cyberattacks, panoptic discrimination, and more. Regulations designed to effectively mitigate these risks must, by technical necessity, include limits on what AIs are allowed to “say.” But according to an emerging scholarly consensus, this will raise grave First Amendment concerns because generative AI outputs are protected speech.

This Article argues otherwise. AI outputs are not protected speech. The reason is simple. When a generative AI system—like ChatGPT—outputs some text, image, or sound, no one thereby communicates. Or at least no one with First Amendment rights does. AIs themselves lack constitutional rights, so their outputs cannot be their own protected speech. Nor are AI outputs a communication from the AI’s creator or user. Unlike other software—video games, for example—generative AIs are not designed to convey any particular message. Just the opposite. Systems like ChatGPT are designed to be able to “say” essentially anything, producing innumerable ideas and opinions that neither creators nor users have conceived or endorsed. Thus, when a human asks an AI a question, the AI’s answer is no more the asker’s speech than a human’s answer would be. Nor do AI outputs communicate their creators’ thoughts, any more than a child’s speech is her parents’ expression. In such circumstances, First Amendment law is clear. Absent a communication from a protected speaker, there is no protected speech.

However, this does not mean that AI outputs get no First Amendment protection at all. The First Amendment is capacious. It applies—albeit less stringently—when the government indirectly burdens speech by regulating speech-facilitating activities and tools. For example, it applies to regulations of listening and loudspeakers. This Article explains why, as a matter of First Amendment law, free speech theory, and computer-scientific fact, AI outputs are best understood as fitting into one or more of these less protected First Amendment categories. These insights will be indispensable to the regulatory project of making AI safe for humanity.

Keywords: AI, ChatGPT, Large Language Models, First Amendment, Free Speech, AI Safety, AI Risk, AI Catastrophe

Suggested Citation

Salib, Peter, AI Outputs Are Not Protected Speech (January 1, 2024). Washington University Law Review, Forthcoming, U of Houston Law Center No. 2024-A-5. Available at SSRN.

Peter Salib (Contact Author)

University of Houston Law Center ( email )

4104 Martin Luther King Blvd.
Houston, TX 77204
United States
