The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice

55 Pages Posted: 22 Nov 2022 Last revised: 17 Feb 2023


Brandon L. Garrett

Duke University School of Law

Cynthia Rudin

Duke University - Department of Computer Science

Date Written: February 16, 2023


Artificial intelligence (AI) is increasingly used to make important decisions that affect individuals and society. As governments and corporations use AI ever more pervasively, one of the most troubling trends is that developers so often design it to be a “black box”: they create models too complex for people to understand, or they conceal how AI functions. Policymakers and the public increasingly sound alarms about black box AI, and a particularly pressing area of concern has been criminal cases, in which a person’s life, liberty, and public safety can be at stake. In the United States and globally, despite concerns that technology may deepen pre-existing racial disparities and overreliance on incarceration, black box AI has proliferated in areas such as DNA mixture interpretation, facial recognition, recidivism risk assessments, and predictive policing. Despite constitutional criminal procedure protections, judges have largely embraced claims that AI should remain undisclosed.

Both champions and critics of AI, however, mistakenly assume that we inevitably face a central trade-off: black box AI may be incomprehensible, but it performs more accurately. But that is not so. In this Article, we question the basis for this assertion, which has so powerfully affected judges, policymakers, and academics. We describe a mature body of computer science research showing how “glass box” AI—designed to be fully interpretable by people—can be more accurate than the black box alternatives. Indeed, black box AI performs predictably worse in settings like the criminal system. After all, criminal justice data is notoriously error prone, and it may also reflect pre-existing racial and socio-economic disparities. Unless AI is interpretable, decisionmakers like lawyers and judges who must use it will not be able to understand it.

Debunking the black box performance myth has implications for constitutional criminal procedure rights and legislative policy. Judges and lawmakers have been reluctant to impair the perceived effectiveness of black box AI by requiring disclosures to the defense. Absent some compelling—or even credible—government interest in keeping AI black box, and given substantial constitutional rights and public safety interests at stake, we argue that the burden rests on the government to justify any departure from the norm that all lawyers, judges, and jurors can fully understand AI. If AI is to be used at all in settings like the criminal system—and we do not suggest that it necessarily should—the bottom line is that glass box AI can better accomplish both fairness and public safety goals. We conclude by calling for national and local regulation to safeguard, in all criminal cases, the right to glass box AI.

Keywords: Artificial intelligence, interpretability, due process, risk assessments, criminal procedure, explainability, facial recognition, prediction, public safety, validation

Suggested Citation

Garrett, Brandon L. and Rudin, Cynthia, The Right to a Glass Box: Rethinking the Use of Artificial Intelligence in Criminal Justice (February 16, 2023). Cornell Law Review, Forthcoming; Duke Law School Public Law & Legal Theory Series No. 2023-03.

Brandon L. Garrett (Contact Author)

Duke University School of Law ( email )

210 Science Drive
Box 90362
Durham, NC 27708
United States
919-613-7090 (Phone)


Cynthia Rudin

Duke University - Department of Computer Science ( email )

LSRC Building
Durham, NC 27708-0204
United States

