An FDA for Algorithms
41 Pages · Posted: 15 Mar 2016 · Last revised: 20 Apr 2017
Date Written: March 15, 2016
The rise of increasingly complex algorithms calls for critical thought about how best to prevent, deter, and compensate for the harms they cause. This paper argues that tort and criminal law will prove no match for the difficult regulatory puzzles algorithms pose. Algorithmic regulation will require federal uniformity, expert judgment, political independence, and pre-market review to prevent - without stifling innovation - the introduction of unacceptably dangerous algorithms into the market. This paper proposes that a new specialist regulatory agency should be created to regulate algorithmic safety. An FDA for algorithms.
Such a federal consumer protection agency should have three powers. First, it should have the power to organize and classify algorithms into regulatory categories by their design, complexity, and potential for harm (in both ordinary use and through misuse). Second, it should have the power to prevent the introduction of algorithms into the market until their safety and efficacy have been proven through evidence-based pre-market trials. Third, the agency should have broad authority to impose disclosure requirements and usage restrictions to prevent algorithms’ harmful misuse.
To explain why a federal agency will be necessary, this paper proceeds in three parts. First, it describes the diversity of algorithms that already exist and that are soon to come. In the future, many algorithms will be “trained,” not “designed.” That means that the operation of many algorithms will be opaque and difficult to predict in edge cases, and responsibility for their harms will be diffuse and difficult to assign. Moreover, although “designed” algorithms already play important roles in many life-or-death situations (from emergency landings to automated braking systems), increasingly it is “trained” algorithms that will be deployed in these mission-critical applications.
Second, this paper explains why other possible regulatory schemes - such as state tort and criminal law, or regulation through existing subject-matter agencies - will be less desirable than a centralized federal regulatory agency for the administration of algorithms as a category. For consumers, tort and criminal law are unlikely to efficiently counter the harms from algorithms: harms traceable to algorithms may frequently be diffuse and difficult to detect, human responsibility and liability for such harms will be difficult to establish, and narrowly tailored usage restrictions may be difficult to enforce through indirect regulation. For innovators, federal preemption of state-by-state and ex-post liability is likely to be attractive.
Third, this paper explains that the concerns driving the regulation of food, drugs, and cosmetics closely resemble the concerns that should drive the regulation of algorithms. With respect to the operation of many drugs, the precise mechanisms by which they produce their benefits and harms are not well understood. The same will soon be true of many of the most important (and potentially dangerous) future algorithms. Drawing on lessons from the fitful growth and development of the FDA, the paper proposes that the FDA’s regulatory scheme is an appropriate model from which to design an agency charged with algorithmic regulation.
The paper closes by emphasizing the need to think proactively about the potential dangers algorithms pose. The United States created the FDA and expanded its regulatory reach only after several serious tragedies revealed its necessity. If we fail to anticipate the trajectory of modern algorithmic technology, history may repeat itself.