Danger Ahead: Risk Assessment and the Future of Bail Reform
Posted: 27 Sep 2017 Last revised: 20 Feb 2018
Date Written: February 19, 2018
In the last five years, legislators in all fifty states and many localities have changed their pretrial justice systems. Reform efforts aim to shrink jails by incarcerating fewer poor, low-risk defendants and, in particular, fewer racial minorities. Many jurisdictions are embracing pretrial risk assessment instruments (statistical tools that use historical data to forecast which defendants can safely be released) as a centerpiece of these changes. Scholars, system practitioners, advocates, and journalists increasingly question the extent to which pretrial risk assessment instruments actually serve these goals. Existing scholarship and debate center on how the instruments may reinforce racial disparities and on how their opaque algorithms may frustrate due process interests.
In this Article, we highlight three underlying challenges that have yet to receive the attention they require. First, today's risk assessment tools make what we term "zombie predictions." That is, the predictive models are trained on data from older bail regimes and are blind to the risk-reducing benefits of recent bail reforms. This leads to predictions that systematically overestimate risk. Second, the "decision-making frameworks" that mediate the court system's use of risk estimates embody crucial moral judgments, yet currently escape public scrutiny. Third, in the longer term, these new tools risk lending an imprimatur of scientific objectivity to ill-defined concepts of "dangerousness," paving the way for a possible expansion of preventive detention and entrenching the Supreme Court's historically recent blessing of preventive detention for dangerousness.
We propose two vital steps that should be seen as minimally necessary to address these challenges. First, where they choose to embrace risk assessment, jurisdictions must carefully define what they wish to predict; must gather and use local, recent data; and must continuously update and calibrate any model on which they choose to rely, investing in data infrastructure where necessary to meet these goals. Second, instruments and frameworks must be subject to strong, inclusive governance.
Keywords: Bail, Risk Assessment, Algorithms, Machine Learning, Civil Rights