Beyond State v. Loomis: Artificial Intelligence, Government Algorithmization, and Accountability
International Journal of Law and Information Technology, Vol. 27, Issue 2, pp.122-141 (2019).
24 Pages · Posted: 20 Jan 2019 · Last revised: 16 Jan 2020
Date Written: December 20, 2018
Developments in data analytics, computational power, and machine learning techniques have driven all branches of government to outsource authority to machines in performing public functions — social welfare, law enforcement, and, most importantly, the courts. Complex statistical algorithms and artificial intelligence (AI) tools are being used to automate decision-making and are having a significant impact on individuals’ rights and obligations. Controversies have emerged regarding the opaque nature of such schemes, the unintended bias against and harm to underrepresented populations, and the broader legal, social, and ethical ramifications. State v. Loomis, a recent case in the United States, demonstrates how unrestrained and unchecked outsourcing of public power to machines may undermine human rights and the rule of law. Through a close examination of the case, this Article unpacks the issues of the ‘legal black box’ and the ‘technical black box’ to identify the risks that rampant ‘algorithmization’ of government functions poses to due process, equal protection, and transparency. We further assess several important governance proposals and suggest ways to improve the accountability of AI-facilitated decisions. As AI systems are increasingly employed in consequential settings across jurisdictions, technologically informed governance models are needed to locate optimal institutional designs that strike a balance between the benefits and costs of algorithmization.
Keywords: State v. Loomis, Artificial Intelligence (AI), Algorithms, Black Box, Human Rights, Rule of Law, Accountability