The Law of AI is the Law of Risky Agents without Intentions

University of Chicago Law Review Online (forthcoming 2024)

Yale Law & Economics Research Paper

Yale Law School, Public Law Research Paper

10 Pages Posted: 12 Jun 2024

Ian Ayres

Yale University - Yale Law School; Yale University - Yale School of Management

Jack M. Balkin

Yale University - Law School

Date Written: June 01, 2024

Abstract

Many areas of the law, including freedom of speech, copyright, and criminal law, make liability turn on whether the actor who causes harm (or creates a risk of harm) has a certain mens rea or intention. But AI agents—at least the ones we currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability. 

Of course, the AI programs themselves are not the responsible actors; instead, they are technologies designed, deployed, and used by human beings that have effects on other human beings. The people who design, deploy, and use AI are the real parties in interest.

We can think of AI programs as acting on behalf of human beings. In this sense AI programs are like agents that lack intentions but that create risks of harm to people. Hence the law of AI is the law of risky agents without intentions.

The law should hold these risky agents to objective standards of behavior, which are familiar in many different parts of the law. These legal standards ascribe intentions to actors—for example, that given the state of their knowledge, actors are presumed to intend the reasonable and foreseeable consequences of their actions. Or legal doctrines may hold actors to objective standards of conduct, for example, a duty of reasonable care or strict liability.

Holding AI agents to objective standards of behavior, in turn, means holding the people and organizations that implement these technologies to objective standards of care and requirements of reasonable reduction of risk.

Take defamation law. Mens rea requirements like the actual malice rule protect human liberty and prevent the chilling of public discussion. But these concerns do not apply to AI programs, which do not exercise human liberty and cannot be chilled. The proper analogy is not to a negligent or reckless journalist but to a defectively designed product—produced by many people in a chain of production—that causes injury to a consumer. The law can give the different players in the chain of production incentives to mitigate AI-created risks.

In copyright law, we should think of AI systems as risky agents that create pervasive risks of copyright infringement at scale. The law should require that AI companies take a series of reasonable steps that reduce the risk of copyright infringement even if they cannot completely eliminate it. A fair use defense tied to these requirements is akin to a safe harbor rule. Instead of litigating in each case whether a particular output of a particular AI prompt violated copyright, this approach asks whether the AI company has put sufficient efforts into risk reduction. If it has, its practices constitute fair use.

These examples suggest why AI systems may require changes in many different areas of the law. But we should always view AI technology in terms of the people and companies that design, deploy, offer and use it. To properly regulate AI, we need to keep our focus on the human beings behind it.

Keywords: AI, Artificial Intelligence, Risk, Agents, Objective Standards, Negligence, Products Liability, Freedom of Speech, Defamation, Copyright

Suggested Citation

Ayres, Ian and Balkin, Jack M., The Law of AI is the Law of Risky Agents without Intentions (June 01, 2024). University of Chicago Law Review Online (forthcoming 2024), Yale Law & Economics Research Paper, Yale Law School, Public Law Research Paper, Available at SSRN: https://ssrn.com/abstract=4862025 or http://dx.doi.org/10.2139/ssrn.4862025
