The Flaws of Policies Requiring Human Oversight of Government Algorithms

Computer Law & Security Review, Volume 45, 2022

39 Pages · Posted: 13 Sep 2021 · Last revised: 26 Apr 2022

Ben Green

University of Michigan at Ann Arbor - Society of Fellows; University of Michigan at Ann Arbor - Gerald R. Ford School of Public Policy; Harvard University - Berkman Klein Center for Internet & Society

Date Written: April 26, 2022

Abstract

As algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits of algorithms while preventing their harms. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to effectively oversee algorithmic decision-making. In this article, I survey 41 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms. This institutional approach operates in two stages. First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making and that any proposed forms of human oversight are supported by empirical evidence. Second, these justifications must receive democratic public review and approval before the agency can adopt the algorithm.

Keywords: Human oversight; Human in the loop; Algorithmic governance; Automated decision-making; Artificial intelligence; AI regulation; Human-algorithm interactions

Suggested Citation

Green, Ben, The Flaws of Policies Requiring Human Oversight of Government Algorithms (April 26, 2022). Computer Law & Security Review, Volume 45, 2022, Available at SSRN: https://ssrn.com/abstract=3921216 or http://dx.doi.org/10.2139/ssrn.3921216

Ben Green (Contact Author)

University of Michigan at Ann Arbor - Society of Fellows

Ann Arbor, MI
United States

University of Michigan at Ann Arbor - Gerald R. Ford School of Public Policy

735 South State Street, Weill Hall
Ann Arbor, MI 48109
United States

Harvard University - Berkman Klein Center for Internet & Society

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States

Paper statistics

Downloads: 1,115
Abstract Views: 5,339
Download Rank: 26,813