Incomprehensible Discrimination

15 Pages · Posted: 12 Apr 2017 · Last revised: 25 May 2017


James Grimmelmann

Cornell Law School; Cornell Tech

Daniel Westreich

University of North Carolina (UNC) at Chapel Hill - Department of Epidemiology

Date Written: April 9, 2017


Plaintiff Ruminant Defense League alleged that the defendant Zootopia Police Department's new algorithmic hiring process unfairly discriminates against herbivores in violation of Title VII. The ZPD responded, and the Court of Appeals agreed, that because its algorithmic model does not explicitly consider species, it does not constitute disparate treatment, and that since the model is predictive of job performance, any disparate impact on herbivores is justified as a business necessity. This case, in other words, squarely presents the problem of “proxies for proscribed criteria” identified by Solon Barocas and Andrew Selbst in their article, Big Data's Disparate Impact, 104 Calif. L. Rev. 671 (2016).

Held: Where the output of an algorithmic model is correlated both with job performance and with membership in a protected class, the employer bears the burden of demonstrating which of these correlations, if either, is causal. Pp. 170–77.

(a) It can be a permissible employment practice to use a properly validated predictive algorithmic model in personnel decisions, even if the model disproportionately favors some groups of applicants and employees over others. Pp. 170–71.

(b) Where such a model has a disparate impact on a protected group, the employer may rely on a defense of business necessity only if it shows that the factors the model relies on are not just correlated with job performance but causally connected to it. Here, it might be the case that the ZPD's model works not because it identifies the characteristics relevant to successful police work, but only because it identifies the herbivores in an applicant pool where the relevant characteristics are unequally distributed. Pp. 171–72.

(c) This allocation of the burden of proof will encourage employers to better understand the algorithmic models they use, and to share that knowledge with the public. Pp. 173–74.

(d) Other bodies of law, including the Fair Credit Reporting Act, already require explanations for why models behave the way they do. Pp. 174–76.

(e) This modification to the burden-shifting framework of disparate impact borrows from disparate treatment doctrine. Title VII does not permit an employer to do indirectly what it could not do directly. P. 176.

Our holding today is simple. Incomprehensible discrimination will not stand.

Keywords: Big Data, Data Mining, Algorithmic Discrimination, Accountable Algorithms, Transparency, Employment Discrimination

JEL Classification: K00

Suggested Citation

Grimmelmann, James and Westreich, Daniel, Incomprehensible Discrimination (April 9, 2017). 7 California Law Review Online 164 (2017), Cornell Legal Studies Research Paper No. 17-28, Available at SSRN:



