The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice

54 Pages · Posted: 22 Nov 2022 · Last revised: 1 Feb 2023

Brandon L. Garrett

Duke University School of Law

Cynthia Rudin

Duke University - Department of Computer Science

Date Written: December 1, 2022

Abstract

Artificial intelligence (AI) is increasingly used to make important decisions that affect individuals and society. As governments and corporations deploy AI ever more pervasively, one of the most troubling trends is that developers so often design it to be a “black box”: they create models too complex for people to understand, or they conceal how the AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that the technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as DNA mixture interpretation, facial recognition, recidivism risk assessments, and predictive policing. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed.

The champions and critics of AI have something in common: both sides argue that we face a central trade-off—black box AI may be incomprehensible, but it performs more accurately. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error-prone, it reflects pre-existing racial and socio-economic disparities, and any AI system must be used by decisionmakers like lawyers and judges—who must understand it.

Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI a black box, and given the substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should be—the bottom line is that glass box AI can better accomplish both fairness and public safety goals. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.

Keywords: Artificial intelligence, interpretability, due process, risk assessments, criminal procedure, explainability, facial recognition, prediction, public safety, validation

Suggested Citation

Garrett, Brandon L. and Rudin, Cynthia, The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice (December 1, 2022). Duke Law School Public Law & Legal Theory Series No. 2023-03. Available at SSRN: https://ssrn.com/abstract=4275661 or http://dx.doi.org/10.2139/ssrn.4275661

Brandon L. Garrett (Contact Author)

Duke University School of Law ( email )

210 Science Drive
Box 90362
Durham, NC 27708
United States
919-613-7090 (Phone)

HOME PAGE: http://www.brandonlgarrett.com/

Cynthia Rudin

Duke University - Department of Computer Science ( email )

LSRC Building
Durham, NC 27708-0204
United States
