Equal Protection Under Algorithms: A New Statistical and Legal Framework

69 pages. Posted: 11 Oct 2019. Last revised: 31 Dec 2019.


Crystal Yang

Harvard Law School

Will Dobbie

Harvard University - Harvard Kennedy School (HKS)

Date Written: October 1, 2019

Abstract

In this paper, we provide a new statistical and legal framework for understanding the legality and fairness of predictive algorithms under the Equal Protection Clause. We begin by reviewing the main legal concerns regarding the use of protected characteristics such as race and of correlates of protected characteristics such as criminal history. The use of race and non-race correlates in predictive algorithms generates direct and proxy effects of race, respectively, that can lead to racial disparities that many view as unwarranted and discriminatory. These effects have produced a mainstream legal consensus that the use of race and non-race correlates in predictive algorithms is both problematic and potentially unconstitutional under the Equal Protection Clause. This mainstream position is also reflected in practice: all commonly used predictive algorithms exclude race, and many exclude non-race correlates such as employment and education.
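To make the distinction between direct and proxy effects concrete, here is a minimal simulation sketch. It is not the paper's model: the variable names (minority, priors) and the data-generating process are hypothetical. The point is only that a score which never sees race can still differ by race whenever an included input is correlated with race.

```python
# Hypothetical illustration of a proxy effect (not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
minority = rng.binomial(1, 0.5, n)          # protected characteristic
# A non-race input whose distribution differs by race, e.g. recorded
# criminal history. The 0.8 shift is an arbitrary assumption.
priors = rng.poisson(lam=1.0 + 0.8 * minority)

# "Race-blind" score: race is excluded, but priors transmit its effect.
score = 0.2 * priors

print("mean score, minority:    ", round(score[minority == 1].mean(), 3))
print("mean score, non-minority:", round(score[minority == 0].mean(), 3))
# The gap between these means is a proxy effect of race operating
# through the correlated input, even though race never enters the score.
```

Excluding priors as well would close this particular gap, but the argument above is that nearly every candidate input is correlated with race to some degree, so exclusion alone does not scale.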

In the second part of the paper, we challenge the mainstream legal position that the use of a protected characteristic always violates the Equal Protection Clause. We first develop a statistical framework that formalizes exactly how the direct and proxy effects of race can lead to algorithmic predictions that disadvantage minorities relative to non-minorities. While an overly formalistic legal solution would require excluding race and all potential non-race correlates, we show that such an algorithm is unlikely to work in practice because nearly all algorithmic inputs are correlated with race. We then show that there are two simple statistical solutions that can eliminate the direct and proxy effects of race, and that are implementable even when all inputs are correlated with race. We argue that our proposed algorithms uphold the principles of the Equal Protection doctrine because they ensure that individuals are not treated differently on the basis of membership in a protected class, in stark contrast to commonly used algorithms that unfairly disadvantage minorities despite the exclusion of race.
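The paper's exact estimators are developed in the full text; the sketch below shows only two fixes in the spirit described above, under an assumed linear model with simulated data. The first includes race during estimation, so that the coefficients on non-race inputs are not contaminated by proxy loading, and then averages out the direct effect at prediction; the second orthogonalizes the non-race input against race before estimation. Whether these correspond exactly to the paper's two constructions should be checked against the full text.

```python
# Sketch of two possible proxy-purging fixes (assumed linear model;
# all data and coefficients below are simulated, not from the paper).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
race = rng.binomial(1, 0.5, n).astype(float)
x = 1.0 + 0.8 * race + rng.normal(size=n)       # input correlated with race
y = 0.5 * x + 0.3 * race + rng.normal(size=n)   # outcome to predict

# Benchmark "race-blind" regression of y on x alone: because race is
# omitted, part of its effect loads onto x's coefficient (proxy effect).
b_blind, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)

# Fix 1: estimate WITH race so x's coefficient is uncontaminated, then
# neutralize the direct effect by fixing race at its mean in prediction.
b_full, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x, race]), y, rcond=None)
pred1 = b_full[0] + b_full[1] * x + b_full[2] * race.mean()

# Fix 2: residualize x on race first, so the remaining variation in the
# input carries no information about race; then estimate on the residual.
x_resid = x - np.polyval(np.polyfit(race, x, 1), race)
g, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), x_resid]), y, rcond=None)
pred2 = g[0] + g[1] * x_resid

print(f"race-blind coefficient on x: {b_blind[1]:.3f}")  # inflated (~0.55 here)
print(f"race-aware coefficient on x: {b_full[1]:.3f}")   # uncontaminated (~0.50)
gap2 = pred2[race == 1].mean() - pred2[race == 0].mean()
print(f"fix 2 mean prediction gap by race: {gap2:.3f}")  # ~0 by construction
```

The contrast between the two sketches mirrors the trade-off in the text: the first keeps the predictive content of the input at its uncontaminated coefficient while removing the direct and proxy effects, whereas the second removes all between-group variation in the input itself.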

We conclude by empirically testing our proposed algorithms in the context of the New York City pretrial system. We show that nearly all commonly used algorithms violate certain principles underlying the Equal Protection Clause by including variables that are correlated with race, generating substantial proxy effects that unfairly disadvantage blacks relative to whites. Both of our proposed algorithms substantially reduce the number of black defendants detained relative to these commonly used algorithms by eliminating these proxy effects. These findings suggest a fundamental rethinking of the Equal Protection doctrine as it applies to predictive algorithms, and they highlight the folly of relying on commonly used algorithms.
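The kind of comparison described in this section can be pictured with a small evaluation harness of the following shape. Everything below is synthetic: the scores, the 25% detention share, and the group labels are assumptions for illustration, and nothing here reproduces the New York City data or the paper's estimates.

```python
# Hypothetical harness for comparing detention disparities across scoring
# rules at a fixed detention capacity (synthetic data, illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
race = rng.binomial(1, 0.5, n)
# Two simulated rules: one with a built-in proxy gap, one with it purged.
blind_score = rng.normal(loc=0.4 * race, scale=1.0)
purged_score = rng.normal(loc=0.0, scale=1.0, size=n)

def detention_rates(score, group, detain_share=0.25):
    """Detain the top detain_share by score; return (group=1, group=0) rates."""
    cutoff = np.quantile(score, 1.0 - detain_share)
    detained = score >= cutoff
    return detained[group == 1].mean(), detained[group == 0].mean()

for name, s in [("with proxy gap", blind_score), ("proxy purged", purged_score)]:
    r1, r0 = detention_rates(s, race)
    print(f"{name}: minority rate {r1:.3f}, non-minority rate {r0:.3f}")
```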

Keywords: algorithms, discrimination, risk assessment, equal protection

JEL Classification: C10, C55, J15, K14, K40

Suggested Citation

Yang, Crystal and Dobbie, Will, Equal Protection Under Algorithms: A New Statistical and Legal Framework (October 1, 2019). Available at SSRN: https://ssrn.com/abstract=3462379 or http://dx.doi.org/10.2139/ssrn.3462379

Crystal Yang (Contact Author)

Harvard Law School (email)

1575 Massachusetts Avenue
Hauser 406
Cambridge, MA 02138
United States

Will Dobbie

Harvard University - Harvard Kennedy School (HKS) (email)

79 John F. Kennedy Street
Cambridge, MA 02138
United States


Paper statistics: 452 downloads; 2,709 abstract views; rank 111,352.