Minds, Machines, and the Law: The Case of Volition in Copyright Law
25 Pages · Posted: 9 Jun 2019
Date Written: May 22, 2019
With the increasing prevalence of ever more sophisticated technology — which permits machines to stand in for or augment humans in a growing number of contexts — the questions of whether, when, and how the so-called actions of machines can and should result in legal liability will become more practically pressing. Although the law has yet to fully grapple with questions like whether machines are (or can be) sufficiently human-like to be the subjects of law, philosophers have long contemplated the nature of machines. Philosophers have considered, for instance, whether human cognition is fundamentally computation — such that it is in principle possible for future artificial intelligences (AI) to possess the properties of human minds, including consciousness, semantic understanding, intention, and even moral responsibility — or if humans and machines are instead fundamentally different, no matter how sophisticated AI becomes. It is thus unsurprising that, in thinking through how the future of the law should accommodate and govern an AI-filled world, the lessons and frameworks to be gleaned from these philosophical discussions will have undeniable relevance.
One important set of questions that the law will inevitably need to confront is whether machines can have mental states, or — at least — something sufficiently like mental states for the purposes of the law. This is because many areas of law have explicit or implicit mental-state requirements for the incurrence of legal liability. Consider, for example, questions of intent and recklessness versus negligence in tort law; mens rea and actus reus in criminal law; offer and acceptance in contract law; and, as we will see, infringement and authorship in copyright law. In each of these contexts, the law either implicitly or explicitly asks for the presence of some particular mental state on the part of the actors in question. Whether the operations of machines can incur legal liability — and what kind of liability they can incur — thus often turns on whether a machine is to be regarded as operating with the required mental state.
In some contexts, the decision already seems to have been made that machines can never possess the mental states required for liability. Consider copyright law’s volitional-act requirement for infringement. Copyright law has generally held that machines making copies of protected material lack the requisite volition for this conduct to give rise to legal liability on the part of those responsible for the machine, even when the machine has been designed to make copies, often of copyrighted works. In other contexts, such as criminal and tort law, the question of machines’ capacity for mental states remains open and underexplored.
We aim in this Essay to challenge any hasty and blanket generalization that machines cannot have mental states as a legal matter, drawing on philosophical thinking surrounding mental states and using copyright’s volitional-act requirement as our case study. In so doing, we conclude that — as a matter of copyright doctrine — a copying technology might be sufficiently “volitional” for the technology provider to be held directly liable for the technology’s so-called actions in producing copies; and — as a matter of general legal theory — machines in some contexts might be capable of being sufficiently “mental” to count as agents of the humans behind them, depending on the aims of the area of law in question. Our conclusion is thus not merely of philosophical interest, but one with practical implications for determinations of legal liability. In the context of copyright law, our chosen case study, our conclusion has implications for who is and is not directly accountable for the copying of protected material, and for the law’s ability to effectuate its goals of encouraging the creation and dissemination of expressive works.
To mount our challenge, after giving an overview of mental states in the law and the puzzle raised by technological advancement, we recount two of the most influential philosophical discussions on minds and machines, and the resulting theoretical distinction between the conscious and functional properties of mental states. Using this distinction as a framework, we argue that it is an open question whether the law’s mental-state requirements seek to track the conscious or merely functional properties of the particular mental state in question, one that depends on the ultimate aims of the relevant area of law. We then defend the view that copyright law’s volitional-act requirement might be interested in merely functional properties, which could — in principle — be replicated by machines. Next, we consider which functional properties copyright law might seek to track and what a machine might have to look like to be “functionally volitional” under copyright law, to count as the technology provider’s agent, and to thereby give rise to direct liability. These relevant functional properties include the ability to pause and analyze the nature of the work in question before “choosing” to undertake an act of copying, one which might cause exposure to liability. On the basis of this framework, we conclude that machines with the appropriate functionality might satisfy copyright law’s volitional-act requirement, thus forming the basis of holding the technology providers directly liable for infringement. Finally, generalizing our framework, we offer preliminary thoughts on machines and mental-state requirements in the contrasting contexts of criminal law and copyright authorship doctrine, as well as a general hypothesis regarding when the law is interested in conscious versus merely functional properties of the mental states in question.
Keywords: artificial intelligence, AI, copyright, volition, machines, technology, consciousness, mental states, liability, infringement, John Searle, David Chalmers, Chinese Room