Synthetic Crowdsourcing: A Machine-Learning Approach to Inconsistency in Adjudication

35 Pages · Posted: 28 Nov 2015 · Last revised: 7 Dec 2017

Hannah Laqueur

University of California, Davis

Ryan Copus

Harvard Law School

Date Written: December 6, 2017


The problem of inconsistent legal and administrative decision making is widespread and well documented. We argue that predictive models of collective decisions can be used to guide and regulate the decisions of individual adjudicators. This "synthetic crowdsourcing" approach simulates a world in which all judges cast multiple independent votes in every case. Synthetic crowdsourcing can extend the core benefits of en banc decision making to all cases while avoiding the dangers of groupthink. Like decision matrices such as the Federal Sentencing Guidelines, synthetic crowdsourcing uses statistical patterns in historical decisions to guide future decisions. But unlike traditional approaches, it leverages machine learning to optimally tailor that guidance, allowing for substantial improvements in the consistency and overall quality of decisions. We illustrate synthetic crowdsourcing with an original dataset built using text processing of transcripts from California parole suitability hearings.
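The core idea, training a predictive model on many judges' historical decisions and reading its output as a simulated collective vote, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the features, synthetic data, and model choice are all invented for exposition.

```python
# Hypothetical sketch of "synthetic crowdsourcing": fit a model to
# historical decisions made by many adjudicators, then treat its
# predicted probability for a new case as a simulated collective vote.
# All features and data below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Invented case features (e.g., years served, age, disciplinary record)
X = rng.normal(size=(n, 3))
# Simulated historical grant/deny decisions (1 = grant parole)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# An ensemble of trees loosely mimics a crowd of independent "voters"
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

new_case = np.array([[1.0, 0.2, -0.3]])
# Fraction of trees "voting" to grant: the synthetic crowd's verdict
vote_share = model.predict_proba(new_case)[0, 1]
print(f"synthetic crowd vote share for grant: {vote_share:.2f}")
```

In practice the paper builds its features via text processing of parole hearing transcripts; the random-forest vote share here simply stands in for the model-based guidance an individual adjudicator would receive.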

Keywords: Judicial Decision Making, Machine Learning, Prediction, Forecasting, Judgmental Bootstrapping, Policy Capture, Inconsistency, Criminal Justice, Judgment and Decision Making, Parole

Suggested Citation

Laqueur, Hannah and Copus, Ryan, Synthetic Crowdsourcing: A Machine-Learning Approach to Inconsistency in Adjudication (December 6, 2017). Available at SSRN.

Hannah Laqueur

University of California, Davis ( email )

One Shields Avenue
Davis, CA 95616
United States
9173642301 (Phone)

Ryan Copus (Contact Author)

Harvard Law School ( email )

1575 Massachusetts
Hauser 406
Cambridge, MA 02138
United States
