Defining Autonomy in the Context of Tort Liability: Is Machine Learning Indicative of Robotic Responsibility?

Posted: 24 Feb 2016


Katherine Sheriff

Emory University, School of Law, Students

Date Written: December 12, 2015

Abstract

Traditional tort law benefits consumers by holding accountable the parties responsible for injury, encouraging greater care in manufacture, and, ultimately, making injured victims whole. Whether traditional notions of legal responsibility comport with the advent of Artificial Intelligence is the sweetheart of academic research. Less attention, however, is given to the increasingly limited role that traditional notions of learning patterns and brain functionality play in the changing landscape of robotic service products. Research in neuroimaging, organizational psychology, and systemic risk shows that decision making does not occur as traditionally portrayed; the human brain is not the ideal analogue for “machine learning.” Robots “learn” by amassing and recognizing relevant data, and “decide” by calculating the probability of a desired outcome based on the input received, as applied across numerous permutations of a given function.

Bypassing the question whether robots can be liable, this Paper focuses on the extent to which machine learning heightens robotic accountability and asks: at what point ought the law to hold robots liable because the decision creating the harm was not a function of software programming on the front end, but a function of robotic choice? This Paper recommends a variation of Ugo Pagallo’s “digital peculium” liability scheme for “hard cases” – those in which fully autonomous robots make decisions absent an appropriate link to the original programmer and thus fall outside the scope of pre-programmed uncertainty. Situating Pagallo’s “hard cases” within the larger jurisprudential debate between H.L.A. Hart and Ronald Dworkin, this Paper concludes by considering whether a right answer exists for the application of legal accountability to ever-increasing robotic autonomy, or whether the question is conclusively indeterminate.
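For readers less familiar with the mechanics the abstract gestures at, the following is a minimal, hypothetical sketch (not drawn from the paper) of what it can mean for a robot to “learn” by amassing relevant data and to “decide” by estimating the probability of a desired outcome for a given input. The class and method names (SimpleLearner, observe, decide) and the toy driving scenario are illustrative assumptions only, not the author’s model.

```python
# Hypothetical sketch of "learning" as accumulating observations and
# "deciding" as choosing the action with the highest estimated probability
# of a desired outcome. Names and data are illustrative assumptions.

from collections import defaultdict


class SimpleLearner:
    def __init__(self):
        # counts[(sensed_input, action)] = [desired outcomes observed, total trials]
        self.counts = defaultdict(lambda: [0, 0])

    def observe(self, sensed_input, action, outcome_was_desired):
        """'Learn' by amassing data on how inputs and actions relate to outcomes."""
        stats = self.counts[(sensed_input, action)]
        stats[1] += 1
        if outcome_was_desired:
            stats[0] += 1

    def decide(self, sensed_input, possible_actions):
        """'Decide' by picking the action whose estimated probability of the
        desired outcome is highest for this input; unseen actions default to 0."""
        def estimated_probability(action):
            successes, trials = self.counts[(sensed_input, action)]
            return successes / trials if trials else 0.0
        return max(possible_actions, key=estimated_probability)


if __name__ == "__main__":
    learner = SimpleLearner()
    # Toy training data: when a pedestrian is detected, braking usually avoids harm.
    for _ in range(9):
        learner.observe("pedestrian_ahead", "brake", outcome_was_desired=True)
    learner.observe("pedestrian_ahead", "swerve", outcome_was_desired=False)
    print(learner.decide("pedestrian_ahead", ["brake", "swerve"]))  # -> "brake"
```

In this toy setting, the “decision” is nothing more than an arithmetic comparison over accumulated data, which is the contrast the abstract draws against traditional portrayals of human decision making.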

Suggested Citation

Sheriff, Katherine, Defining Autonomy in the Context of Tort Liability: Is Machine Learning Indicative of Robotic Responsibility? (December 12, 2015). Available at SSRN: https://ssrn.com/abstract=2735945 or http://dx.doi.org/10.2139/ssrn.2735945

Katherine Sheriff (Contact Author)

Emory University, School of Law, Students

Atlanta, GA
United States

