'Un'Fair Machine Learning Algorithms

49 Pages · Posted: 4 Jul 2019 · Last revised: 15 Apr 2021

Runshan Fu

New York University (NYU) - Leonard N. Stern School of Business

Manmohan Aseri

University of Pittsburgh - Katz Graduate School of Business

Param Vir Singh

Carnegie Mellon University - David A. Tepper School of Business

Kannan Srinivasan

Carnegie Mellon University - David A. Tepper School of Business

Date Written: June 10, 2020

Abstract

Machine learning algorithms are becoming widely deployed in real-world decision-making. Ensuring fairness in algorithmic decision-making is a crucial policy issue. Current legislation ensures fairness by barring algorithm designers from using demographic information in their decision-making. As a result, algorithms must ensure equal treatment to be legally compliant. However, in many cases, ensuring equal treatment leads to disparate impact, particularly when there are differences among groups along demographic classes. In response, several "fair" machine learning algorithms that require impact parity (e.g., equal opportunity) have recently been proposed to adjust for societal inequalities; advocates propose changing the law to allow the use of protected class-specific decision rules. We show that these "fair" algorithms that require impact parity, while conceptually appealing, can make everyone worse off, including the very class they aim to protect. Compared to the current law, which requires treatment parity, these "fair" algorithms limit the benefits a firm derives from a more accurate algorithm. As a result, profit-maximizing firms could under-invest in learning, i.e., in improving the accuracy of their machine learning algorithms. We show that the investment in learning decreases when misclassification is costly, which is exactly the case when greater accuracy is otherwise desired. Our paper highlights the importance of considering the strategic behavior of stakeholders when developing and evaluating "fair" machine learning algorithms. Overall, our results indicate that "fair" algorithms that require impact parity, if turned into law, may not deliver some of the anticipated benefits.
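
For readers unfamiliar with the impact-parity notion the abstract contrasts with treatment parity: equal opportunity requires equal true positive rates across protected groups, whereas treatment parity only requires that the same decision rule be applied to everyone. The Python sketch below is an illustrative aid, not from the paper; the group labels, toy data, and tolerance are assumptions.

# A minimal sketch (not from the paper) of the equal-opportunity ("impact
# parity") check: equal true positive rates, P(y_hat = 1 | y = 1, group = a),
# across demographic groups. Data, group labels, and tolerance are illustrative.

from collections import defaultdict

def true_positive_rates(y_true, y_pred, groups):
    """Per-group true positive rate: share of actual positives predicted positive."""
    positives = defaultdict(int)       # count of y = 1 per group
    true_positives = defaultdict(int)  # count of y = 1 and y_hat = 1 per group
    for yt, yp, g in zip(y_true, y_pred, groups):
        if yt == 1:
            positives[g] += 1
            if yp == 1:
                true_positives[g] += 1
    return {g: true_positives[g] / positives[g] for g in positives}

def satisfies_equal_opportunity(y_true, y_pred, groups, tol=0.05):
    """True if the largest gap between group-wise TPRs is within the tolerance."""
    tprs = true_positive_rates(y_true, y_pred, groups)
    return max(tprs.values()) - min(tprs.values()) <= tol

# Toy example: a single group-blind prediction vector (treatment parity) can
# still yield unequal group-wise TPRs, i.e. fail equal opportunity.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(true_positive_rates(y_true, y_pred, groups))          # approx {'A': 0.67, 'B': 0.33}
print(satisfies_equal_opportunity(y_true, y_pred, groups))  # False

The point of the contrast: treatment parity constrains the decision rule (it must ignore group membership), while impact parity constrains outcomes conditional on true labels, which is the kind of constraint whose effect on a firm's returns from accuracy the paper analyzes.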

Keywords: algorithmic bias, fair algorithms, machine learning bias, equality of opportunity, algorithms and bias, machine learning, bias

JEL Classification: M30, M38, O3, K0, K2, D0

Suggested Citation

Fu, Runshan and Aseri, Manmohan and Singh, Param Vir and Srinivasan, Kannan, 'Un'Fair Machine Learning Algorithms (June 10, 2020). Available at SSRN: https://ssrn.com/abstract=3408275 or http://dx.doi.org/10.2139/ssrn.3408275

Runshan Fu

New York University (NYU) - Leonard N. Stern School of Business ( email )

44 West 4th Street
Suite 9-160
New York, NY 10012
United States

Manmohan Aseri

University of Pittsburgh - Katz Graduate School of Business ( email )

Pittsburgh, PA 15260
United States

HOME PAGE: https://www.business.pitt.edu/people/manmohan-aseri

Param Vir Singh (Contact Author)

Carnegie Mellon University - David A. Tepper School of Business ( email )

5000 Forbes Avenue
Pittsburgh, PA 15213-3890
United States
412-268-3585 (Phone)

Kannan Srinivasan

Carnegie Mellon University - David A. Tepper School of Business ( email )

5000 Forbes Avenue
Pittsburgh, PA 15213-3890
United States

Paper statistics

Downloads: 1,046
Abstract Views: 4,121
Rank: 36,064