People Prefer Moral Discretion to Procedurally Fair Algorithms: Algorithm Aversion Beyond Intransparency
30 Pages · Posted: 1 Jun 2021 · Last revised: 25 Apr 2022
Date Written: May 31, 2021
We explore aversion to the use of algorithms in moral decision-making. So far, this aversion has been explained mainly by the fear of opaque decisions that are potentially biased. Using incentivized experiments, we study what role the desire for human discretion in moral decision-making plays. This focus seems justified in light of evidence suggesting that people may not doubt the quality of algorithmic decisions, yet still reject them. In our first study, we found that people prefer humans with decision-making discretion to algorithms that rigidly apply exogenously given, human-created fairness principles to specific cases. In the second study, we found that people do not prefer humans to algorithms because they appreciate flesh-and-blood decision-makers per se, but because they appreciate humans’ freedom to transcend fairness principles at will. Our results contribute to a deeper understanding of algorithm aversion. They indicate that emphasizing the transparency of algorithms that clearly follow fairness principles may not be sufficient to foster societal acceptance of algorithms, and they suggest reconsidering certain features of the decision-making process itself.
Keywords: algorithm aversion, artificial intelligence, moral discretion, behavioral ethics
JEL Classification: O33, D91, C91