Robot Ipsa Loquitur

67 Pages. Posted: 24 Feb 2019. Last revised: 28 Feb 2019.

Date Written: January 20, 2019

Abstract

Accidents are becoming automated. From self-driving cars to self-flying drones, robots are increasingly colliding with the world. And one of the most pressing questions raised by these technologies — indeed, one of the great regulatory challenges of the coming era — is how the law should account for crashes involving such complex automated systems. By now, many have weighed in, including the field’s luminaries. And, though responses vary, a tentative consensus has emerged on at least one front. Age-old negligence liability is essentially a nonstarter. For negligence requires a showing of fault. And in a world where vexingly complex robots roam, how could one possibly find the needle of negligence in a haystack composed of millions of lines of computer code?

This Article challenges that view. In sharp contrast to the prevailing wisdom, it argues that widespread debates over the so-called ‘vexing tort problems’ raised by modern robots have overlooked a crucial issue: inference. Negligence, after all, needn’t be shown by pointing directly to a faulty line of code. Like all facts, it can be proven indirectly through circumstantial evidence. Indeed, as the ancient negligence rule of res ipsa loquitur makes plain, sometimes an accident can “speak for itself.”

Using the first robot accused of negligence as a case study, this Article shows how advanced telematics technologies in modern machines provide richly detailed records of accidents that, themselves, speak to the negligence of the parties involved. In doing so, it offers the first wide-ranging account of how inference-based analysis can — and, in fact, already does — elegantly resolve liability determinations for otherwise confoundingly complex accidents. After showing that the purportedly novel challenges posed by robots are neither unprecedented nor unresolvable, nor even unique to emerging technologies, the Article then takes a more practical turn. Drawing from a rich vein of precedent involving automated accidents, it outlines steps that courts, practitioners, and policymakers can take to streamline fault determinations using an approach it calls robot ipsa loquitur. With trillion-dollar markets and millions of lives on the line, it argues that drastic calls by leading experts to upend conventional liability are ahistorical, contrary to tort law’s fundamental goals, and unnecessary to protect the interests of accident victims. A simpler, more productive approach would let the robot speak for itself.

Keywords: Liability, Tort, Legal, Product, Negligence, Accident, Law, Regulation, AI, Machine, Learning, Artificial, Intelligence, Automated, Robot, Autonomous, Vehicle, Drone, Car, Driverless, Self-Driving, Tesla, Waymo, Uber, Cruise

Suggested Citation

Casey, Bryan, Robot Ipsa Loquitur (January 20, 2019). Georgetown Law Journal, 2019. Available at SSRN: https://ssrn.com/abstract=3327673 or http://dx.doi.org/10.2139/ssrn.3327673

Bryan Casey (Contact Author)

Stanford Law School

559 Nathan Abbott Way
Stanford, CA 94305-8610
United States
