Less Discriminatory Algorithms

53 Pages
Posted: 31 Oct 2023
Last revised: 11 Mar 2024

Emily Black

Columbia University - Barnard College

John Logan Koepke

Upturn

Pauline Kim

Washington University in St. Louis - School of Law

Solon Barocas

Microsoft Research; Cornell University

Mingwei Hsu

Upturn

Date Written: October 2, 2023

Abstract

Entities that use algorithmic systems in traditional civil rights domains like housing, employment, and credit should have a duty to search for and implement less discriminatory algorithms (LDAs). Why? Work in computer science has established that, contrary to conventional wisdom, for a given prediction problem there are almost always multiple possible models with equivalent performance, a phenomenon termed model multiplicity. Critically for our purposes, different models of equivalent performance can produce different predictions for the same individual and, in aggregate, exhibit different levels of impact across demographic groups. As a result, when an algorithmic system displays a disparate impact, model multiplicity suggests that developers may be able to discover an alternative model that performs equally well but has less discriminatory impact. Indeed, the promise of model multiplicity is that an equally accurate but less discriminatory alternative algorithm almost always exists. But without dedicated exploration, it is unlikely that developers will discover potential LDAs.
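
To make the search concrete, the sketch below shows one minimal, hypothetical form an LDA search could take, assuming synthetic data, a logistic regression model family, a demographic-parity gap as the disparity metric, and an EPSILON accuracy tolerance defining "equivalent performance." All of these choices are illustrative assumptions for exposition; they are not the procedure the paper prescribes.

```python
# Hypothetical sketch of an LDA search under model multiplicity.
# Assumptions (not from the paper): synthetic data, logistic regression,
# demographic-parity gap as the disparity metric, EPSILON tolerance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: features X, binary labels y, binary group attribute g.
n = 5000
g = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5))
X[:, 0] += 0.8 * g  # group membership correlates with one feature
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, g, test_size=0.3, random_state=0)

def parity_gap(preds, groups):
    """Demographic-parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

# Enumerate many near-equivalent candidates; varying regularization and
# dropping one feature at a time stands in for a fuller multiplicity search.
candidates = []
for C in (0.01, 0.1, 1.0, 10.0):
    for drop in range(X.shape[1]):
        keep = [j for j in range(X.shape[1]) if j != drop]
        clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr[:, keep], y_tr)
        preds = clf.predict(X_te[:, keep])
        candidates.append(((preds == y_te).mean(), parity_gap(preds, g_te)))

best_acc = max(acc for acc, _ in candidates)
EPSILON = 0.01  # tolerance defining "equivalent performance" (an assumption)

# Among models within EPSILON of the best accuracy, pick the one with the
# smallest disparity -- the less discriminatory alternative.
acc, gap = min((c for c in candidates if c[0] >= best_acc - EPSILON),
               key=lambda c: c[1])
print(f"best accuracy: {best_acc:.3f}")
print(f"chosen LDA:    accuracy {acc:.3f}, parity gap {gap:.3f}")
```

A real search would range over richer model classes and use disparity measures appropriate to the legal domain; the sketch only illustrates that, once candidate models exist, selecting the less discriminatory near-equivalent is a cheap, mechanical step.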

Model multiplicity has profound ramifications for the legal response to discriminatory algorithms. Under disparate impact doctrine, it makes little sense to say that a given algorithmic system used by an employer, creditor, or housing provider is either “justified” or “necessary” if an equally accurate model that exhibits less disparate effect is available and discoverable with reasonable effort. Indeed, the overarching purpose of our civil rights laws is to remove precisely these arbitrary barriers to full participation in the nation’s economic life, particularly for marginalized racial groups. Accordingly, the law should place a duty of reasonable search for LDAs on entities that develop and deploy predictive models in covered civil rights domains. The law should recognize this duty in at least two ways. First, under disparate impact doctrine, a defendant’s burden of justifying a model with discriminatory effects should be understood to include showing that it conducted a reasonable search for LDAs before implementing the model. Second, new regulatory frameworks for the governance of algorithms should require entities to search for and implement LDAs as part of the model-building process.

Keywords: AI, artificial intelligence, model multiplicity, discrimination, civil rights, algorithmic decision-making, disparate impact

Suggested Citation

Black, Emily and Koepke, John Logan and Kim, Pauline and Barocas, Solon and Hsu, Mingwei, Less Discriminatory Algorithms (October 2, 2023). Georgetown Law Journal, Vol. 113, No. 1, 2024, Washington University in St. Louis Legal Studies Research Paper Forthcoming, Available at SSRN: https://ssrn.com/abstract=4590481 or http://dx.doi.org/10.2139/ssrn.4590481

Emily Black

Columbia University - Barnard College

3009 Broadway
New York, NY 10027
United States

John Logan Koepke (Contact Author)

Upturn

1015 15th St, NW Suite 600
Washington, DC 20005
United States

Pauline Kim

Washington University in St. Louis - School of Law

Campus Box 1120
St. Louis, MO 63130
United States
314-935-8570 (Phone)
314-935-5356 (Fax)

Solon Barocas

Microsoft Research

300 Lafayette Street
New York, NY 10012
United States

Cornell University

Ithaca, NY 14853
United States

Mingwei Hsu

Upturn

1015 15th St, NW Suite 600
Washington, DC 20005
United States

Paper statistics

Downloads: 1,173
Abstract Views: 6,647
Rank: 35,523