Algorithmic Fairness, Algorithmic Discrimination

62 Pages · Posted: 4 Feb 2020 · Last revised: 6 Apr 2020

Thomas Nachbar

University of Virginia School of Law

Date Written: February 2, 2020


There has been an explosion of concern about the use of computers to make decisions that affect humans, from hiring to lending approvals to setting prison terms. Many have pointed out that using computer programs to inform decisions may propagate biases or otherwise lead to undesirable outcomes. Some have called for increased transparency; others have called for algorithms to be tuned to produce more racially balanced outcomes. The problem is likely to draw increasing attention as computers make increasingly important and sophisticated decisions in our daily lives.

Drawing on both the computer science and legal literature on algorithmic fairness, this paper makes four major contributions to the debate. First, it provides a legal response to arguments for incorporating "fairness" into algorithmic decisionmakers by demonstrating that legal rules generally apply as side constraints, not as fairness functions that can be optimized. Second, by examining the problem through the lens of discrimination law, the paper shows that the problems posed by computational decisionmakers closely resemble the historical, institutional discrimination that discrimination law has evolved to control, a response to the claim that the problem is truly novel because it involves computerized decisionmaking. Third, the paper responds to calls for transparency in computational decisionmaking by demonstrating that transparency is unnecessary for providing accountability and that discrimination law itself offers a model for handling cases of unfair algorithmic discrimination, with or without transparency. Fourth, the paper addresses a problem that has divided the literature: how to correct discriminatory results produced by algorithms. Rather than treating the choice as binary, I offer a third way, one that disaggregates the process of correcting algorithmic decisionmakers into two separate decisions: a decision to reject an old process and a separate decision to adopt a new one. Because those two decisions are subject to different legal requirements, this approach gives firms and agencies added flexibility in avoiding the worst kinds of discriminatory outcomes.

In the end, current discrimination law provides most of the answers to the wide variety of "fairness"-related claims likely to arise in the context of computational decisionmakers, regardless of the specific technology underlying them.

Keywords: constitutional law, equal protection, artificial intelligence, machine learning, algorithms, discrimination

Suggested Citation

Nachbar, Thomas, Algorithmic Fairness, Algorithmic Discrimination (February 2, 2020). Florida State University Law Review, Forthcoming, Virginia Public Law and Legal Theory Research Paper No. 2020-11, Available at SSRN:

Thomas Nachbar (Contact Author)

University of Virginia School of Law ( email )

580 Massie Road
Charlottesville, VA 22903
United States
434-924-7588 (Phone)
434-924-7536 (Fax)
