The Institutional Life of Algorithmic Risk Assessment
30 Pages · Posted: 24 Jun 2019 · Last revised: 22 Jul 2019
Date Written: June 18, 2019
As states nationwide increasingly turn to risk assessment algorithms as tools for criminal justice reform, scholars and civil society actors alike warn that this technological turn comes with complications. Research to date tends to focus on fairness, accountability, and transparency within algorithmic tools. Although attention to whether these instruments are fair or biased is normatively essential, this Article contends that this inquiry cannot be the whole conversation. Examining fairness or bias in a tool in isolation elides vital bigger-picture considerations about the institutions and political systems within which tools are developed and deployed. Using California’s Money Bail Reform Act of 2017 (SB 10) as an example, this Article analyzes how risk assessment statutes create frameworks that both constrain and empower policymakers and technical actors in the design and implementation of a particular instrument. Specifically, it focuses on the tension between, on one hand, a top-down, global understanding of fairness, accuracy, and lack of bias and, on the other, a tool that is well-tailored to local considerations. It explores three potential technical and associated policy consequences of SB 10’s framework: proxies, Simpson’s paradox, and thresholding. And it calls for greater attention to the design of risk assessment statutes and their allocation of global and local authority.
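The Simpson's-paradox concern named above can be sketched with a small synthetic example (all counts, county names, and the "success" metric below are hypothetical illustrations, not data or results from the Article): an instrument can appear to outperform the status quo within every locality, yet appear worse once localities are pooled into a single global comparison.

```python
# Hypothetical, synthetic counts illustrating Simpson's paradox in the
# risk-assessment setting. "success" might mean, e.g., release without
# rearrest; the numbers are invented for illustration only.
counts = {
    # county: {policy: (successes, total)}
    "County 1": {"tool": (81, 87),   "status_quo": (234, 270)},
    "County 2": {"tool": (192, 263), "status_quo": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each county, the tool outperforms the status quo...
for county, by_policy in counts.items():
    t = rate(*by_policy["tool"])
    q = rate(*by_policy["status_quo"])
    assert t > q
    print(f"{county}: tool={t:.2f} status_quo={q:.2f}")

# ...but pooled across counties, the ordering reverses, because the two
# policies were applied to very differently sized caseloads per county.
def pooled(policy):
    totals = [by_policy[policy] for by_policy in counts.values()]
    return sum(s for s, _ in totals), sum(n for _, n in totals)

t_s, t_n = pooled("tool")
q_s, q_n = pooled("status_quo")
print(f"Pooled: tool={t_s / t_n:.2f} status_quo={q_s / q_n:.2f}")
assert t_s / t_n < q_s / q_n  # the global comparison flips
```

The reversal is the crux of the global-versus-local tension: a statute mandating a single statewide standard of accuracy or fairness evaluates the pooled numbers, while each county experiences only its own row.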
Keywords: Machine Learning, Technology, Law, Criminal Justice, Algorithm, Risk Assessment