Synthetic Crowdsourcing: A Machine-Learning Approach to Inconsistency in Adjudication
35 Pages · Posted: 28 Nov 2015 · Last revised: 7 Dec 2017
Date Written: December 6, 2017
Abstract
The problem of inconsistent legal and administrative decision making is widespread and well documented. We argue that predictive models of collective decisions can be used to guide and regulate the decisions of individual adjudicators. This "synthetic crowdsourcing" approach simulates a world in which all judges cast multiple independent votes in every case. Synthetic crowdsourcing can extend the core benefits of en banc decision making to all cases while avoiding the dangers of groupthink. Like decision matrices such as the Federal Sentencing Guidelines, synthetic crowdsourcing uses statistical patterns in historical decisions to guide future decisions. But unlike traditional approaches, it leverages machine learning to tailor that guidance to the facts of each case, allowing for substantial improvements in the consistency and overall quality of decisions. We illustrate synthetic crowdsourcing with an original dataset built using text processing of transcripts from California parole suitability hearings.
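To make the core idea concrete, the sketch below shows one way a "synthetic crowd" benchmark might be built: a predictive model is fit to historical decisions and then queried for a new case, and its estimate stands in for how the pool of past adjudicators would collectively have decided. This is a minimal illustration only; the toy transcripts, features, and model choice are hypothetical and do not reflect the authors' actual data or pipeline.

# Illustrative sketch only: a toy stand-in for the "synthetic crowdsourcing"
# idea described in the abstract. The transcripts, outcomes, and model choice
# here are hypothetical, not the authors' actual method or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical historical records: hearing-transcript text and past outcomes
# (1 = parole granted, 0 = denied). A real dataset would be far larger.
transcripts = [
    "inmate completed vocational training and shows remorse",
    "disciplinary violations in the last year, no release plan",
    "strong family support, stable housing and employment offer",
    "minimized the offense, psychological report notes high risk",
]
outcomes = [1, 0, 1, 0]

# Fit a simple text model that approximates how past adjudicators,
# in aggregate, tended to decide similar cases.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(transcripts, outcomes)

# For a new case, the predicted grant probability serves as a
# "synthetic crowd" benchmark against which an individual
# adjudicator's decision could be compared or flagged.
new_case = ["completed substance abuse program, parole plan in place"]
crowd_estimate = model.predict_proba(new_case)[0, 1]
print(f"Synthetic-crowd grant probability: {crowd_estimate:.2f}")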
Keywords: Judicial Decision Making, Machine Learning, Prediction, Forecasting, Judgmental Bootstrapping, Policy Capture, Inconsistency, Criminal Justice, Judgment and Decision Making, Parole