Understanding Algorithm Aversion: When Do People Abandon AI After Seeing It Err?
54 Pages · Posted: 23 Dec 2022 · Last revised: 17 May 2023
Date Written: December 9, 2022
Abstract
Technological advances have given consumers the option to choose between services provided by “artificial intelligence” (AI) and those provided by human agents. Previous research (Dietvorst et al. 2015) has shown that consumers display “algorithm aversion” after seeing an AI err. We explore the boundaries of this effect. In a series of experiments examining preference for an AI or a human forecaster in statistical prediction tasks, we find that (1) when participants are informed of either the AI’s or the human forecaster’s previous error, algorithm aversion disappears as the AI’s (human forecaster’s) error in the feedback becomes smaller (larger); and (2) when the feedback indicates that the human forecaster and the AI made the same previous error, participants do not abandon the AI. These results follow from participants quite rationally updating their beliefs about the relative competence of the AI and the human forecaster based on the accuracy cue. Furthermore, we find that algorithm aversion is triggered not when an accuracy cue suggests the AI performs worse than one expected, but when the feedback indicates that the AI’s error is larger than the error one expects of the human forecaster. This suggests that the perceived relative competence of the AI and the human forecaster ultimately determines participants’ preference. Overall, our findings suggest that people tolerate AI imperfection to a greater degree than previously thought, but firms should still strive to improve the accuracy of their AI, as accuracy affects algorithm acceptance.
Keywords: Consumer Behavior, Artificial Intelligence, Algorithm Aversion and Appreciation, Belief-updating, Forecasting