Antidiscriminatory Algorithms

55 Pages; Posted: 10 Jan 2019; Last revised: 8 Feb 2019

Stephanie Bornstein

Loyola Law School, Los Angeles; University of California, Berkeley - Berkeley Center on Comparative Equality & Anti-Discrimination Law

Date Written: December 20, 2018

Abstract

Can algorithms be used to advance equality goals in the workplace? A handful of legal scholars have raised concerns that the use of big data at work may lead to protected class discrimination that could fall outside the reach of current antidiscrimination law. Existing scholarship suggests that, because algorithms are “facially neutral,” they pose no problem of unequal treatment. As a result, algorithmic discrimination cannot be challenged using a disparate treatment theory of liability under Title VII of the Civil Rights Act of 1964 (Title VII). Instead, it presents a problem of unequal outcomes, subject to challenge only under Title VII’s disparate impact framework. Yet under current doctrine, scholars suggest, any disparate impact that results from an employer’s use of algorithmic decision-making could be excused as a justifiable business practice. Given this catch-22, scholars propose either regulating the algorithms or reinterpreting the law.

This Article seeks to challenge current thinking on algorithmic discrimination. Both the “improve the algorithms” and the “improve the law” approaches focus solely on a clash between the anticlassification (formal equality) and antisubordination (substantive equality) goals of Title VII. But Title VII also serves an important antistereotyping goal: the principle that people should be treated not just equally across protected class groups but also individually, free from stereotypes associated with even one’s own group. This Article is the first to propose that some algorithmic discrimination may be challenged as disparate treatment using Title VII’s stereotype theory of liability. An antistereotyping approach offers guidance for improving hiring algorithms and the uses to which they are put, to ensure that algorithms are applied to counteract rather than reproduce bias in the workplace. Moreover, framing algorithmic discrimination as a problem of disparate treatment is essential for similar challenges outside of the employment context—for example, challenges to governmental use of algorithms in the criminal justice context raised under the Equal Protection Clause, which does not recognize disparate impact claims.

The current focus on ensuring that algorithms do not lead to new discrimination at work obscures that the technology was intended to do more: to improve upon human decision-making by suppressing biases to make the most efficient and least discriminatory decisions. Applying the existing doctrine of Title VII more robustly and incorporating a focus on its antistereotyping goal may help deliver on the promise of moving beyond mere nondiscrimination and toward actively antidiscriminatory algorithms.

Keywords: algorithms, big data, discrimination, stereotype, bias, employment, civil rights, Title VII, Civil Rights Act of 1964, Equal Protection

JEL Classification: J71, K31

Suggested Citation

Bornstein, Stephanie, Antidiscriminatory Algorithms (December 20, 2018). Alabama Law Review, Vol. 70, No. 2, p. 519, 2018; University of Florida Levin College of Law Research Paper No. 19-6. Available at SSRN: https://ssrn.com/abstract=3307893

Stephanie Bornstein (Contact Author)

Loyola Law School, Los Angeles

919 Albany Street
Los Angeles, CA 90015-1211
United States

University of California, Berkeley - Berkeley Center on Comparative Equality & Anti-Discrimination Law

Berkeley, CA 94720-7200
United States
