Measuring Algorithmic Fairness

56 Pages · Posted: 16 Jul 2019 · Last revised: 10 Jun 2020


Deborah Hellman

University of Virginia School of Law

Date Written: July 11, 2019

Abstract

Algorithmic decision making is both increasingly common and increasingly controversial. Critics worry that algorithmic tools are not transparent, accountable, or fair. Assessing the fairness of these tools has been especially fraught because it requires that we agree about what fairness is and what it entails. Unfortunately, we do not. The technical literature is now littered with a multitude of measures, each purporting to assess fairness along some dimension. Two types of measures stand out. According to one, algorithmic fairness requires that the score an algorithm produces be equally accurate for members of legally protected groups, blacks and whites for example. According to the other, algorithmic fairness requires that the algorithm produce the same percentage of false positives or false negatives for each of the groups at issue. Unfortunately, there is often no way to achieve parity in both dimensions. This fact raises a pressing question: which type of measure should we prioritize, and why?
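To make the tension concrete, here is a minimal sketch in Python of the two families of measures the abstract describes, computed from hypothetical confusion-matrix counts for two groups with different base rates. All counts and group names are illustrative assumptions, not drawn from the Article. With these numbers, the score is equally accurate for both groups (equal positive predictive value), yet the false positive rates diverge, which illustrates why parity in both dimensions often cannot be achieved at once.

```python
# A minimal sketch of the two families of fairness measures described above,
# computed from hypothetical confusion-matrix counts. All figures are
# illustrative, not drawn from the Article.

def rates(tp, fp, fn, tn):
    """Predictive accuracy (PPV) and error rates for one group."""
    ppv = tp / (tp + fp)   # of those the algorithm flags, the share truly positive
    fpr = fp / (fp + tn)   # share of true negatives wrongly flagged
    fnr = fn / (fn + tp)   # share of true positives wrongly cleared
    return ppv, fpr, fnr

# Two hypothetical groups with different base rates of the outcome (25% vs 50%).
groups = {
    "group A": dict(tp=40, fp=10, fn=10, tn=140),
    "group B": dict(tp=80, fp=20, fn=20, tn=80),
}

for name, counts in groups.items():
    ppv, fpr, fnr = rates(**counts)
    print(f"{name}: PPV={ppv:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
# group A: PPV=0.80  FPR=0.07  FNR=0.20
# group B: PPV=0.80  FPR=0.20  FNR=0.20
# Equal predictive accuracy (PPV), yet unequal false positive rates:
# when base rates differ, parity on both measures cannot generally hold.
```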

This Article makes three contributions to the debate about how best to measure algorithmic fairness: one conceptual, one normative, and one legal. First, equal predictive accuracy ensures that a score means the same thing for each group at issue. As such, it bears on what one ought to believe about a scored individual. Because questions of fairness usually concern action, not belief, this measure is ill suited as a measure of fairness. This is the Article's conceptual contribution. Second, this Article argues that parity in the ratio of false positives to false negatives is a normatively significant measure. While a lack of parity in this dimension is not constitutive of unfairness, it provides an important reason to suspect that unfairness exists. This is the Article's normative contribution. Interestingly, improving the overall accuracy of an algorithm lessens this unfairness. Unfortunately, the common assumption that antidiscrimination law prohibits the use of racial and other protected classifications in all contexts inhibits those who design algorithms from making them as fair and accurate as possible. This Article's third contribution is to show that the law poses less of a barrier than many assume.
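The measure the abstract singles out as normatively significant, parity in the ratio of false positives to false negatives, can likewise be sketched. The error counts below are hypothetical and chosen only to show how a skewed ratio within one group signals possible unfairness; nothing here is taken from the Article's data.

```python
# A self-contained sketch of the ratio of false positives to false negatives
# within each group, the measure the abstract treats as normatively
# significant. The error counts are hypothetical.

def fp_fn_ratio(fp, fn):
    """Ratio of false positives to false negatives for one group."""
    return fp / fn

# Hypothetical error counts for two groups scored by the same algorithm.
errors = {
    "group A": (30, 10),  # 30 false positives, 10 false negatives
    "group B": (10, 30),  # the error burden runs the other way
}

for name, (fp, fn) in errors.items():
    print(f"{name}: FP/FN ratio = {fp_fn_ratio(fp, fn):.2f}")
# group A: FP/FN ratio = 3.00  (errors skew toward wrongly flagging members)
# group B: FP/FN ratio = 0.33  (errors skew toward wrongly clearing members)
# On the abstract's account, such a disparity is evidence of possible
# unfairness, not itself constitutive of unfairness.
```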

Keywords: discrimination, algorithms, fairness, equal protection

Suggested Citation

Hellman, Deborah, Measuring Algorithmic Fairness (July 11, 2019). Virginia Law Review, Forthcoming; Virginia Public Law and Legal Theory Research Paper No. 2019-39; Virginia Law and Economics Research Paper No. 2019-15. Available at SSRN: https://ssrn.com/abstract=3418528

Deborah Hellman (Contact Author)

University of Virginia School of Law

580 Massie Road
Charlottesville, VA 22903
United States


Paper statistics

Downloads: 2,328 · Abstract Views: 8,983 · Rank: 12,942