When AIs Outperform Doctors: The Dangers of a Tort-Induced Over-Reliance on Machine Learning and What (Not) to Do About it

57 Pages · Posted: 13 Feb 2018

A. Michael Froomkin

University of Miami - School of Law

Ian R. Kerr

University of Ottawa - Common Law Section

Joelle Pineau

McGill University

Date Written: January 31, 2018

Abstract

Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and—in the longer run—for the quality of medical diagnostics itself?

This article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. In time, effective machine learning could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Eventually, a similar dynamic might extend to treatment as well. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, the result may be future decision scenarios that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies often prove less effective in real clinical practice than in preliminary evaluation, the lack of transparency introduced by ML algorithms could lead to a decrease in the quality of care. The article describes the salient technical aspects of this scenario, particularly as it relates to diagnosis, and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules in order to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires maintaining meaningful physician participation in the diagnostic loop.

Keywords: AI, Tort Law, Machine Learning, AI Policy, Medicine, Medical Malpractice Law, Health Law, Health Policy, Medical Legal Studies, Diagnosis, Neural Networks, Doctors

JEL Classification: K13

Suggested Citation

Froomkin, A. Michael and Kerr, Ian R. and Pineau, Joelle, When AIs Outperform Doctors: The Dangers of a Tort-Induced Over-Reliance on Machine Learning and What (Not) to Do About it (January 31, 2018). University of Miami Legal Studies Research Paper No. 18-3. Available at SSRN: https://ssrn.com/abstract=3114347 or http://dx.doi.org/10.2139/ssrn.3114347

A. Michael Froomkin (Contact Author)

University of Miami - School of Law

P.O. Box 248087
Coral Gables, FL 33146
United States
305-284-4285 (Phone)
305-284-6506 (Fax)

Ian R. Kerr

University of Ottawa - Common Law Section

57 Louis Pasteur Street
Ottawa, K1N 6N5
Canada
613-562-5800 (Phone)

Joelle Pineau

McGill University

