Adversarial Scrutiny of Evidentiary Statistical Software

Proceedings of the Conference on Fairness, Accountability, & Transparency (ACM FAccT), 2022

28 pages. Posted: 31 May 2022. Last revised: 23 Jun 2022.

Rediet Abebe

Harvard University - Society of Fellows

Moritz Hardt

Max Planck Institute for Intelligent Systems

Angela Jin

University of California, Berkeley - Department of Electrical Engineering and Computer Sciences

John Miller

University of California, Berkeley

Ludwig Schmidt

University of Washington - Paul G. Allen School of Computer Science & Engineering

Rebecca Wexler

University of California, Berkeley, School of Law

Date Written: May 12, 2022

Abstract

The U.S. criminal legal system increasingly relies on software output to convict and incarcerate people. In a large number of cases each year, the government makes these consequential decisions based on evidence from statistical software—such as probabilistic genotyping, environmental audio detection, and toolmark analysis tools—that defense counsel cannot fully cross-examine or scrutinize. This undermines the commitments of the adversarial criminal legal system, which relies on the defense’s ability to probe and test the prosecution’s case to safeguard individual rights.

Responding to this need to adversarially scrutinize output from such software, we propose robust adversarial testing as an audit framework to examine the validity of evidentiary statistical software. We define and operationalize this notion of robust adversarial testing for defense use by drawing on a large body of recent work in robust machine learning and algorithmic fairness. We demonstrate how this framework both standardizes the process for scrutinizing such tools and empowers defense lawyers to examine their validity for instances most relevant to the case at hand. We further discuss existing structural and institutional challenges within the U.S. criminal legal system that may create barriers to implementing this and other such audit frameworks, and close with a discussion of policy changes that could help address these concerns.

Suggested Citation

Abebe, Rediet and Hardt, Moritz and Jin, Angela and Miller, John and Schmidt, Ludwig and Wexler, Rebecca, Adversarial Scrutiny of Evidentiary Statistical Software (May 12, 2022). Proceedings of the Conference on Fairness, Accountability, & Transparency (ACM FAccT), 2022, Available at SSRN: https://ssrn.com/abstract=4107017 or http://dx.doi.org/10.2139/ssrn.4107017

Rediet Abebe

Harvard University - Society of Fellows

1875 Cambridge Street
Cambridge, MA 02138
United States

HOME PAGE: https://www.cs.cornell.edu/~red/

Moritz Hardt

Max Planck Institute for Intelligent Systems

Max-Planck-Ring 4
Tübingen, 72076
Germany

Angela Jin

University of California, Berkeley - Department of Electrical Engineering and Computer Sciences

Berkeley, CA 94720
United States

John Miller

University of California, Berkeley

310 Barrows Hall
Berkeley, CA 94720
United States

Ludwig Schmidt

University of Washington - Paul G. Allen School of Computer Science & Engineering

United States

Rebecca Wexler (Contact Author)

University of California, Berkeley, School of Law

691 Simon Hall
Berkeley, CA 94720
United States
510 664 5258 (Phone)
