AI Outputs Are Not Protected Speech
77 Pages · Posted: 16 Feb 2024
Date Written: January 1, 2024
Abstract
AI safety laws are coming. Researchers, advocates, and the White House agree. Rapidly advancing generative AI technology has immense potential, but it also raises new and serious dangers—deadly bioterrorism, crippling cyberattacks, panoptic discrimination, and more. Regulations designed to effectively mitigate these risks must, by technical necessity, include limits on what AIs are allowed to “say.” But according to an emerging scholarly consensus, this will raise grave First Amendment concerns, because generative AI outputs are protected speech.
This Article argues otherwise. AI outputs are not protected speech. The reason is simple. When a generative AI system—like ChatGPT—outputs some text, image, or sound, no one thereby expresses themselves. Or at least no one with First Amendment rights does. AIs themselves lack constitutional rights, so their outputs cannot be their own protected speech. Nor are AI outputs a communication from the AI’s creator or user. Unlike other software—video games, for example—generative AIs are not designed to convey any particular message. Just the opposite. Systems like ChatGPT are designed to be able to “say” essentially anything, producing innumerable ideas and opinions that neither creators nor users have conceived or endorsed. Thus, when a human asks an AI a question, the AI’s answer is no more the asker’s speech than a human’s answer would be. Nor do AI outputs communicate their creators’ thoughts, any more than a child’s speech is her parents’ expression. In such circumstances, First Amendment law is clear. Absent a communication from a protected speaker, there is no protected speech.
This, however, does not mean that AI outputs get no First Amendment protection at all. The First Amendment is capacious. It protects—albeit to a lesser extent—many things besides speech, like speech-facilitating activities and tools. For example: listening and loudspeakers, respectively. This Article explains why, as a matter of First Amendment law, free speech theory, and computer-scientific fact, AI outputs are best understood as fitting into one or more of these less-protected First Amendment categories. These insights will be indispensable to the regulatory project of making AI safe for humanity.
Keywords: AI, ChatGPT, Large Language Models, First Amendment, Free Speech, AI Safety, AI Risk, AI Catastrophe