Negligence and AI's Human Users

58 Pages · Posted: 2 Apr 2019

Andrew D. Selbst

Data & Society Research Institute; Yale Information Society Project

Date Written: March 11, 2019

Abstract

Negligence law is often asked to adapt to new technologies. So it is with artificial intelligence (AI). But AI is different. Drawing on examples in medicine, financial advice, data security, and driving in semi-autonomous vehicles, this Article argues that AI poses serious challenges for negligence law. By inserting a layer of inscrutable, unintuitive, and statistically derived code between a human decisionmaker and the consequences of that decision, AI disrupts our typical understanding of responsibility for choices gone wrong. The Article argues that AI’s unique nature introduces four complications into negligence: 1) the unforeseeability of the specific errors that AI will make; 2) capacity limitations when humans interact with AI; 3) the introduction of AI-specific software vulnerabilities into decisions not previously mediated by software; and 4) distributional concerns based on AI’s statistical nature and potential for bias.

Tort scholars have mostly overlooked these challenges. This is understandable because they have focused on autonomous robots, especially autonomous vehicles, which can easily kill, maim, or injure people. But that focus neglects the full range of what AI is. Outside of robotics, AI technologies are not autonomous. Rather, they are primarily decision-assistance tools that aim to improve on the inefficiency, arbitrariness, and bias of human decisions. By focusing on a technology that eliminates users, tort scholars have concerned themselves with product liability and innovation and, as a result, have missed the implications for negligence law, the regime that governs when harm comes from users of AI.

The Article also situates these observations within broader themes of negligence law: the relationship between bounded rationality and foreseeability, the need to update conceptions of reasonableness in light of new technology, and the difficulty of merging statistical facts with individual determinations, such as fault. This analysis suggests that although systems of regulatory support might allow negligence law to operate as intended, an approach to oversight that is not based in individual fault is likely to be more fruitful.

Keywords: tort, negligence, artificial intelligence, machine learning

Suggested Citation

Selbst, Andrew D., Negligence and AI's Human Users (March 11, 2019). Boston University Law Review, Forthcoming. Available at SSRN: https://ssrn.com/abstract=

Andrew D. Selbst (Contact Author)

Data & Society Research Institute

36 West 20th Street
11th Floor
New York, NY 10011
United States

Yale Information Society Project

127 Wall Street
New Haven, CT 06511
United States
