Preempting the Artificially Intelligent Machine

71 Pages · Posted: 4 Sep 2019 · Last revised: 14 Mar 2020

Charlotte Tschider

Loyola University Chicago School of Law

Date Written: August 27, 2019

Abstract

The medical device industry and new technology start-ups have dramatically increased investment in artificial intelligence (AI) applications, including diagnostic tools and AI-enabled devices. These technologies have been positioned as reducing rising health care costs while simultaneously improving health outcomes. Technologies like AI-enabled surgical robots, AI-enabled insulin pumps, and cancer detection applications hold tremendous promise, yet without appropriate oversight they will likely pose major safety issues. While preventative safety measures may reduce risk to patients using these technologies, effective regulatory-tort regimes also permit recovery when preventative solutions are insufficient.

The Food and Drug Administration (FDA), the administrative agency responsible for overseeing the safety and efficacy of medical devices, has not effectively addressed AI system safety issues in its clearance processes. If the FDA cannot reasonably reduce the risk of injury for AI-enabled medical devices, injured patients should be able to rely on ex post recovery options, as in products liability cases. However, the Medical Device Amendments Act (MDA) of 1976 introduced an express preemption clause that the U.S. Supreme Court has interpreted to nearly foreclose liability claims, based almost entirely on the comprehensiveness of FDA clearance review processes. At its inception, MDA preemption aimed to balance consumer interests in safe medical devices with efficient, consistent regulation that promotes innovation and reduces costs.

Although preemption remains an important mechanism for balancing injury risks against device availability, the introduction of AI software dramatically changes the risk profile of medical devices. Due to the inherent opacity and changeability of the AI algorithms powering AI machines, it is nearly impossible to predict all potential safety hazards a faulty AI system might pose to patients. This article identifies key preemption issues for AI machines as they affect ex ante and ex post regulatory-tort allocation, including actual FDA review for parallel claims, bifurcation of software and device reviews, and dynamics of the technology itself that may enable plaintiffs to avoid preemption. The Author then recommends an alternative conception of the regulatory-tort allocation for AI machines that will create a more comprehensive and complementary safety and compensatory model.

Keywords: FDA, AI, artificial intelligence, medical device, preemption, tort law, administrative law

Suggested Citation

Tschider, Charlotte, Preempting the Artificially Intelligent Machine (August 27, 2019). Brigham Young University Law Review, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3443987 or http://dx.doi.org/10.2139/ssrn.3443987

Charlotte Tschider (Contact Author)

Loyola University Chicago School of Law

25 E. Pearson
Chicago, IL 60611
United States
