Human-Algorithmic Bias: Source, Evolution, and Impact

54 Pages · Posted: 29 Aug 2022 · Last revised: 3 Dec 2024

Xiyang Hu

Carnegie Mellon University

Yan Huang

Carnegie Mellon University - David A. Tepper School of Business

Beibei Li

Carnegie Mellon University - H. John Heinz III School of Public Policy and Management

Tian Lu

Department of Information Systems, Arizona State University

Date Written: December 02, 2024

Abstract

Prior work on human-algorithmic bias has struggled to empirically identify the underlying mechanisms of bias, because in a typical "one-time" decision-making scenario, different mechanisms tend to generate the same patterns of observable decisions. In this study, we leverage a unique repeat decision-making setting in a high-stakes micro-lending context to uncover the source, evolution dynamics, and impacts of bias. We first develop and estimate a structural econometric model of the decision dynamics to understand the source and evolution of potential bias among the human evaluators who grant microloans. We find that both preference-based bias and belief-based bias are present in the evaluators' decisions, and that both favor female applicants. Through counterfactual simulations, we quantify the effects of the two types of bias on fairness and on profits: eliminating either bias improves both the fairness of financial resource allocation and the platform's profits. Furthermore, to examine how human biases evolve when inherited by machine learning (ML) algorithms, we train a set of state-of-the-art ML algorithms for default-risk prediction on real-world datasets with human biases encoded within them, and on counterfactual datasets with those biases partially or fully removed. Comparing decision outcomes across these counterfactual settings, we find that even fairness-unaware ML algorithms can reduce the bias present in human loan-granting decisions. Interestingly, while removing both types of human bias from the training data further improves ML fairness, the fairness-enhancing effects differ significantly between new and repeat applicants. Based on our findings, we discuss how to reduce decision bias most effectively in a human-machine learning pipeline.
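The ML comparison step described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical reconstruction, not the paper's actual pipeline: the synthetic data-generating process, the column names, the 0.3 label-flip rate standing in for human bias toward female applicants, and the 0.5 approval threshold are all assumptions made for illustration. It trains the same fairness-unaware classifier (scikit-learn's GradientBoostingClassifier) once on labels with a simulated human bias encoded in them and once on a counterfactual copy with that bias removed, then compares the gender gap in predicted approval rates.

```python
# Hypothetical sketch of the paper's ML comparison step: train the same
# fairness-unaware classifier on (a) labels with a simulated human bias
# encoded in and (b) a counterfactual copy with the bias removed, then
# compare a simple fairness metric (gender gap in approval rates).
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Synthetic applicant pool: two creditworthiness features and a gender flag.
df = pd.DataFrame({
    "income": rng.normal(0, 1, n),
    "credit_history": rng.normal(0, 1, n),
    "female": rng.integers(0, 2, n),
})
true_risk = 1 / (1 + np.exp(df["income"] + df["credit_history"]))  # P(default)
df["default"] = rng.random(n) < true_risk

# "Human-biased" labels: evaluators lenient toward female applicants,
# flipping some of their defaults to non-defaults (an illustrative stand-in
# for preference- and belief-based bias baked into historical decisions).
biased = df.copy()
flip = (biased["female"] == 1) & biased["default"] & (rng.random(n) < 0.3)
biased.loc[flip, "default"] = False

def approval_gap(train_df):
    """Train on train_df; return female-minus-male predicted-approval gap."""
    X = train_df[["income", "credit_history", "female"]]
    y = train_df["default"].astype(int)
    X_tr, X_te, y_tr, _ = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    approve = clf.predict_proba(X_te)[:, 1] < 0.5  # approve if low default risk
    f = (X_te["female"] == 1).values
    return approve[f].mean() - approve[~f].mean()

print(f"approval gap, biased labels:   {approval_gap(biased):+.3f}")
print(f"approval gap, debiased labels: {approval_gap(df):+.3f}")
```

Under these assumptions, the gap shrinks when the bias is removed from the training labels, mirroring the paper's comparison of models trained on real-world versus counterfactually debiased data.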

Keywords: Algorithmic Bias, Human Bias, Machine Learning, Structural Modeling, Micro-lending

Suggested Citation

Hu, Xiyang and Huang, Yan and Li, Beibei and Lu, Tian, Human-Algorithmic Bias: Source, Evolution, and Impact (December 02, 2024). Available at SSRN: https://ssrn.com/abstract=4195014 or http://dx.doi.org/10.2139/ssrn.4195014

Xiyang Hu

Carnegie Mellon University

Pittsburgh, PA
United States

HOME PAGE: http://www.andrew.cmu.edu/user/xiyanghu/

Yan Huang

Carnegie Mellon University - David A. Tepper School of Business

5000 Forbes Avenue
Pittsburgh, PA 15213-3890
United States

Beibei Li

Carnegie Mellon University - H. John Heinz III School of Public Policy and Management

Pittsburgh, PA 15213-3890
United States

Tian Lu (Contact Author)

Department of Information Systems, Arizona State University

Tempe, AZ 85287
United States

HOME PAGE: http://isearch.asu.edu/profile/tianlu1

Paper statistics

Downloads: 362
Abstract Views: 1,338
Rank: 165,514