Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action

58 Pages. Posted: 30 Jan 2022. Last revised: 5 Dec 2022.

Pauline Kim

Washington University in St. Louis - School of Law

Date Written: January 26, 2022


The growing use of predictive algorithms is heightening concerns that they may discriminate, but mitigating or removing bias requires designers to be aware of protected characteristics and take them into account. If they do so, however, will those efforts themselves be considered a form of discrimination? Put concretely, if model-builders take race into account to prevent racial bias against Black people, have they thereby discriminated against white people? Some scholars assume so and seek to justify those practices under existing affirmative action doctrine. By invoking the Court's affirmative action jurisprudence, however, they implicitly assume that these practices entail discrimination against white people and require special justification. This Article argues that these scholars have started the analysis in the wrong place. Rather than starting from that assumption, we should first ask whether particular race-aware strategies constitute discrimination at all. Despite rhetoric about colorblindness, some forms of race consciousness are widely accepted as lawful. Because creating an algorithm is a complex, multi-step process involving many choices, tradeoffs, and judgment calls, there are many different ways a designer might take race into account, and not all of these strategies entail discrimination against white people. Only if a particular strategy is found to discriminate is it necessary to scrutinize it under affirmative action doctrine. Framing the analysis in this way matters, because affirmative action doctrine imposes a heavy legal burden of justification. In addition, treating all race-aware algorithms as a form of discrimination reinforces the false notion that leveling the playing field for disadvantaged groups somehow disrupts the entitlements of a previously advantaged group. It also mistakenly suggests that, prior to considering race, algorithms are neutral processes that uncover some objective truth about merit or desert, rather than properly understanding them as human constructs that reflect the choices of their creators.

Keywords: discrimination, affirmative action, fairness, algorithms, artificial intelligence, automated decision-making, employment

Suggested Citation

Kim, Pauline, Race-Aware Algorithms: Fairness, Nondiscrimination and Affirmative Action (January 26, 2022). California Law Review, Vol. 110, pp. 1539-1596, Washington University in St. Louis Legal Studies Research Paper No. 22-01-02, Available at SSRN: https://ssrn.com/abstract=4018414

Pauline Kim (Contact Author)

Washington University in St. Louis - School of Law ( email )

Campus Box 1120
St. Louis, MO 63130
United States
314-935-8570 (Phone)
314-935-5356 (Fax)
