When Does Physician Use of AI Increase Liability?

62(1) Journal of Nuclear Medicine 17-21 (2021)

UCLA School of Law, Law-Econ Research Paper No. 21-01

12 Pages. Posted: 14 Jan 2021. Last revised: 14 Jan 2021.

Kevin Tobia

Georgetown University Law Center; Georgetown University - Department of Philosophy

Aileen Nielsen

ETH Zurich, Center for Law and Economics

Alexander Stremitzer

ETH Zurich

Date Written: January 1, 2021

Abstract

An increasing number of automated and artificially intelligent (AI) systems make medical treatment recommendations, including “personalized” recommendations that can deviate from standard care. Legal scholars argue that following such nonstandard treatment recommendations will increase liability in medical malpractice, undermining the use of potentially beneficial medical AI. However, such liability depends in part on lay judgments by jurors: when physicians use AI systems, in which circumstances would jurors hold physicians liable? To determine potential jurors’ judgments of liability, we conducted an online experimental study of a nationally representative sample of 2,000 U.S. adults. Each participant read one of four scenarios in which an AI system provides a treatment recommendation to a physician. The scenarios varied the AI recommendation (standard or nonstandard care) and the physician’s decision (to accept or reject that recommendation). In each scenario, the physician’s decision subsequently caused harm, and participants then assessed the physician’s liability. Our results indicate that physicians who receive advice from an AI system to provide standard care can reduce the risk of liability by accepting, rather than rejecting, that advice, all else equal. However, when an AI system recommends nonstandard care, there is no similar shielding effect of rejecting that advice and so providing standard care. The tort law system is therefore unlikely to undermine the use of AI precision-medicine tools and may even encourage the use of these tools.

Keywords: oncology, artificial intelligence, liability, precision medicine, experimental jurisprudence

Suggested Citation

Tobia, Kevin and Nielsen, Aileen and Stremitzer, Alexander, When Does Physician Use of AI Increase Liability? (January 1, 2021). 62(1) Journal of Nuclear Medicine 17-21 (2021), UCLA School of Law, Law-Econ Research Paper No. 21-01, Available at SSRN: https://ssrn.com/abstract=3739441

Kevin Tobia (Contact Author)

Georgetown University Law Center ( email )

600 New Jersey Avenue, NW
Washington, DC 20001
United States

HOME PAGE: http://www.law.georgetown.edu/faculty/kevin-tobia/

Georgetown University - Department of Philosophy

37th and O Streets, N.W.
Washington, DC 20007
United States

Aileen Nielsen

ETH Zurich, Center for Law and Economics ( email )

LEE G104
Leonhardstrasse 21
Zurich
Switzerland

Alexander Stremitzer

ETH Zurich ( email )

Haldeneggsteig 4
Zurich, Zurich 8092
Switzerland

HOME PAGE: https://laweconbusiness.ethz.ch/group/professor/stremitzer.html

Paper statistics: Downloads 252 · Abstract Views 1,072 · Rank 241,090