Proxy Discrimination in the Age of Artificial Intelligence and Big Data

62 Pages

Daniel Schwarcz

University of Minnesota Law School

Anya Prince

University of Iowa College of Law

Date Written: March 6, 2019

Abstract

Big data and Artificial Intelligence (“AI”) are revolutionizing the ways in which firms, governments, and employers classify individuals. Surprisingly, however, one of the most important threats posed by this revolution to antidiscrimination regimes is largely unexplored or misunderstood in the extant literature. This is the risk that modern algorithms will result in “proxy discrimination.” Proxy discrimination is a particularly pernicious subset of disparate impact. Like all forms of disparate impact, it involves a facially neutral practice that disproportionately harms members of a protected class. But a practice producing a disparate impact amounts to proxy discrimination only when the usefulness to the discriminator of the facially neutral practice derives, at least in part, from the very fact that it produces a disparate impact. Historically, this occurred when a firm intentionally sought to discriminate against members of a protected class by relying on a proxy for class membership, such as zip code. However, proxy discrimination need not be intentional when membership in a protected class is predictive of a discriminator’s facially neutral goal, making discrimination “rational.” In these cases, firms may unwittingly proxy discriminate, knowing only that a facially neutral practice produces desirable outcomes. This Article argues that AI and big data are game changers when it comes to this risk of unintentional, but “rational,” proxy discrimination. AIs armed with big data are inherently structured to engage in proxy discrimination whenever they are deprived of information about membership in a legally suspect class that is genuinely predictive of a legitimate objective. Simply denying AIs access to the most intuitive proxies for predictive but suspect characteristics does little to thwart this process; instead, it simply causes AIs to locate less intuitive proxies.
For these reasons, as AIs become ever smarter and big data becomes ever bigger, proxy discrimination will represent an increasingly fundamental challenge to antidiscrimination regimes that seek to limit “rational discrimination.” This Article offers a menu of potential strategies for combating this risk of proxy discrimination by AI, including prohibiting the use of non-approved types of discrimination, mandating the collection and disclosure of data about impacted individuals’ membership in legally protected classes, and requiring firms to employ statistical models that isolate only the predictive power of non-suspect variables.

Keywords: Proxy Discrimination, Artificial Intelligence, Insurance, Big Data, GINA

Suggested Citation

Schwarcz, Daniel B. and Prince, Anya, Proxy Discrimination in the Age of Artificial Intelligence and Big Data (March 6, 2019). Iowa Law Review, Forthcoming. Available at SSRN: https://ssrn.com/abstract=

Daniel B. Schwarcz (Contact Author)

University of Minnesota Law School

229 19th Avenue South
Minneapolis, MN 55455
United States

HOME PAGE: http://www.law.umn.edu/profiles/daniel-schwarcz

Anya Prince

University of Iowa College of Law

Melrose and Byington
Iowa City, IA 52242
United States

