Standards of Fairness for Disparate Impact Assessment of Big Data Algorithms

48 Cumberland Law Review 102 (2017)

79 Pages · Posted: 29 Apr 2018

Date Written: April 2, 2018


This paper describes and assesses several competing group and individual statistical standards of fairness, including the mathematical conflict between predictive parity and equal error rates that forces organizations to choose which measure to satisfy. The choice between statistical concepts of group fairness and individual fairness recalls the dispute between those who think the anti-discrimination laws target group-disadvantaging practices and those who think they target the arbitrary misclassification of individuals. Analysts who embrace statistical measures of group fairness, such as statistical parity and equal group error rates, aim to reduce the subordination of disadvantaged groups; data scientists who favor measures of individual fairness aim to avoid the arbitrary misclassification of individuals. Group fairness calls for analytics to pursue statistical parity or equal group error rates for protected groups, while individual fairness says analytics should aim only at accurate predictions. The goal of individual fairness is satisfied by equal accuracy in classification, while the goal of group fairness allows some sacrifice of accuracy to protect vulnerable groups.

To bring this normative dimension into sharper focus, the paper explores the extent to which the choice between the statistical concepts of individual and group fairness reflects a fundamental difference in attitudes toward the principle that people are entitled to reap the rewards of their own talents and skills. The idea that similar people ought to be treated similarly, and its image in the statistical concept of equal predictive accuracy, gains strength from the normative principle that rewards ought to be distributed according to talents and skills. The paper addresses this normative dimension by contrasting Robert Nozick's and John Rawls's approaches to rewarding talent. It argues in favor of carving out an exception to the principle of basing rewards on merit in order to allow the use of group fairness measures. The paper also explores the extent to which relevant Supreme Court decisions would permit designing or modifying algorithms to move toward statistical parity or equalized group error rates.
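The mathematical conflict the abstract refers to can be made concrete with a short sketch (not drawn from the paper itself): when two groups have different base rates of the predicted outcome, a classifier that achieves equal false positive and false negative rates across the groups cannot also achieve predictive parity (equal positive predictive value). The function name and the example base rates below are illustrative assumptions; the relationship itself follows directly from Bayes' rule.

```python
def ppv(base_rate, fpr, fnr):
    """Positive predictive value implied by Bayes' rule:
    P(actual positive | predicted positive)."""
    true_positives = (1 - fnr) * base_rate        # positives correctly flagged
    false_positives = fpr * (1 - base_rate)       # negatives wrongly flagged
    return true_positives / (true_positives + false_positives)

# Two groups scored with identical error rates...
fpr, fnr = 0.10, 0.20

# ...but different underlying base rates (hypothetical figures).
ppv_group_a = ppv(0.50, fpr, fnr)   # group A: 50% base rate
ppv_group_b = ppv(0.20, fpr, fnr)   # group B: 20% base rate

# Equal error rates, yet the predictive values diverge:
print(round(ppv_group_a, 3))   # → 0.889
print(round(ppv_group_b, 3))   # → 0.667
```

Because the groups' base rates differ, equalizing error rates forces unequal predictive values, and vice versa; an organization must decide which fairness measure to sacrifice, which is the choice the paper analyzes.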

Suggested Citation

MacCarthy, Mark, Standards of Fairness for Disparate Impact Assessment of Big Data Algorithms (April 2, 2018). 48 Cumberland Law Review 102 (2017). Available at SSRN.

Mark MacCarthy (Contact Author)

Georgetown University ( email )

3520 Prospect St NW
Suite 311
Washington, DC 20057
United States

