Stretching Human Laws to Apply to Machines: The Dangers of a 'Colorblind' Computer

32 Pages · Posted: 1 Dec 2019 · Last revised: 28 Oct 2020

Zach Harned

Stanford Law School

Hanna Wallach

Microsoft Research New York City

Date Written: November 15, 2019

Abstract

Automated decision making has become widespread in recent years, largely due to advances in machine learning. As a result of this trend, machine learning systems are increasingly used to make decisions in high-stakes domains, such as employment or university admissions. The weightiness of these decisions has prompted the realization that, like humans, machines must also comply with the law. But human decision-making processes are quite different from automated decision-making processes, which creates a mismatch between laws and the decision makers to which they are intended to apply. In turn, this mismatch can lead to counterproductive outcomes.

We take antidiscrimination laws in employment as a case study, with a particular focus on Title VII of the Civil Rights Act of 1964. A common strategy for mitigating bias in employment decisions is to “blind” human decision makers to the sensitive attributes of the applicants, such as race. The same strategy can be used in an automated decision-making context by blinding the machine learning system to the race of the applicants (strategy 1). This strategy seems to comply with Title VII, but it does not necessarily mitigate bias, because machine learning systems are adroit at exploiting proxies for race, such as zip code, whenever any are available. An alternative strategy is to not blind the system to race (strategy 2), thereby allowing it to use this information to mitigate bias. However, although preferable from a machine learning perspective, this strategy appears to violate Title VII.
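
To make the proxy problem concrete, the following is a minimal, purely illustrative Python sketch. The data are synthetic and every variable name and number is hypothetical; nothing here is drawn from the article itself. The sketch assumes a zip code variable that is highly correlated with race and historical hiring labels that encode a racial bias, and it shows how a model "blinded" to race under strategy 1 can still reconstruct race from the proxy.

# Illustrative sketch of the strategy 1 failure mode: the race column is
# dropped, but a correlated proxy (zip_code) remains among the features.
# All data are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

race = rng.integers(0, 2, n)                               # 0/1 sensitive attribute
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)   # 90% aligned with race
skill = rng.normal(0, 1, n)                                # job-relevant signal

# Historical hiring labels are biased: they depend on race, not just skill.
hired = (skill + 1.5 * race + rng.normal(0, 1, n) > 0).astype(int)

# Strategy 1: "blind" the model by excluding race from its inputs.
X_blind = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X_blind, hired)
pred = model.predict(X_blind)

print("hire rate, group 0:", pred[race == 0].mean())
print("hire rate, group 1:", pred[race == 1].mean())
print("zip_code coefficient:", model.coef_[0][1])

Run as written, the sketch should show a substantial gap in predicted hire rates between the two groups, along with a large positive coefficient on zip_code, even though race never appears among the model's inputs.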

We contend that this conflict between strategies 1 and 2 highlights a broader legal and policy challenge, namely, that laws designed to regulate human behavior may not be appropriate when stretched to apply to machines. Indeed, they may even be detrimental to the very people that they were designed to protect. Although scholars have explored legal arguments in an attempt to press strategy 2 into compliance with Title VII, we believe that a middle ground lies between strategies 1 and 2 that involves partial blinding—that is, blinding the system to race only during deployment and not during training (strategy 3). We present strategy 3 as a “Goldilocks” solution to the problem of discrimination in employment decisions (as well as in other domains), because it allows for the mitigation of bias while still complying with Title VII. Ultimately, any solution to the general problem of stretching human laws to apply to machines must be sociotechnical in nature, drawing on work in both machine learning and the law. This is borne out in strategy 3, which involves innovative work in machine learning (viz. the development of disparate learning processes) and creative legal analysis (viz. analogizing strategy 3 to legally accepted auditing procedures).
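
As one hedged illustration of what a disparate learning process might look like in code, the sketch below continues the synthetic example above. It uses Kamiran-Calders-style reweighing, a technique from the fairness literature chosen here for simplicity and not necessarily the mechanism the article has in mind: race informs training only through sample weights, and the deployed model scores applicants without ever receiving race as an input.

# Illustrative sketch of strategy 3 (a disparate learning process): race is
# consulted during training, via sample reweighing, but the deployed model's
# inputs exclude race entirely. All data are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(group, y):
    # Kamiran-Calders weights: make labels statistically independent of group.
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            mask = (group == g) & (y == label)
            w[mask] = (group == g).mean() * (y == label).mean() / mask.mean()
    return w

# Same synthetic applicants as in the previous sketch.
rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, n)
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)
skill = rng.normal(0, 1, n)
hired = (skill + 1.5 * race + rng.normal(0, 1, n) > 0).astype(int)

# Training: race enters only through the sample weights.
X = np.column_stack([skill, zip_code])    # deployed inputs exclude race itself
weights = reweigh(race, hired)
model = LogisticRegression().fit(X, hired, sample_weight=weights)

# Deployment (partial blinding): applicants are scored without race.
pred = model.predict(X)
print("hire rate, group 0:", pred[race == 0].mean())
print("hire rate, group 1:", pred[race == 1].mean())
print("zip_code coefficient:", model.coef_[0][1])

The deployed artifact is race-blind in exactly the sense partial blinding requires: race is used during training to counteract the proxy, so the learned coefficient on zip_code should shrink relative to strategy 1, yet race is absent from the model's inputs at decision time.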

Keywords: machine learning, Title VII, discrimination, anti-discrimination, artificial intelligence, civil rights, bias, automation, automated decision making

Suggested Citation

Harned, Zach and Wallach, Hanna, Stretching Human Laws to Apply to Machines: The Dangers of a 'Colorblind' Computer (November 15, 2019), 45 Fla. St. L. Rev. 617 (2020). Available at SSRN: https://ssrn.com/abstract=3488060

Zach Harned (Contact Author)

Stanford Law School

559 Nathan Abbott Way
Stanford, CA 94305
United States

Hanna Wallach

Microsoft Research New York City

641 Avenue of the Americas
New York, NY 10011
United States
