Understanding Algorithm Aversion: When and Why Do People Abandon AI After Seeing It Err?
65 Pages · Posted: 23 Dec 2022 · Last revised: 6 Nov 2023
Date Written: December 9, 2022
Technological advances have given people the option to choose between services provided by artificial intelligence (AI) and those provided by human agents. Previous research (Dietvorst et al. 2015) concludes that people abandon AI after seeing it err (the “algorithm aversion” phenomenon). However, this conclusion rests primarily on studies in which both the AI and the human agents make significant errors. In a series of pre-registered experiments examining preference for an AI or a human forecaster, we test the boundaries of this phenomenon by varying the AI’s and the human’s performance. We find that when participants are informed of either the AI’s or the human forecaster’s previous error, algorithm aversion disappears as the AI’s (human forecaster’s) error in the feedback becomes smaller (larger). We further establish a cognitive mechanism that reconciles several seminal findings (e.g., Dietvorst et al. 2015, Castelo et al. 2019, Logg et al. 2019): whether algorithm aversion (or appreciation) emerges ultimately depends on people’s perceived relative competence of the AI and the human forecaster. Performance feedback serves as an accuracy cue that shapes these perceptions, and algorithm aversion occurs when the feedback induces belief updating to the point that the AI’s perceived error exceeds that of the human forecaster. Further analysis of the belief-updating dynamics suggests that people draw non-discriminatory inferences from the same feedback about the AI and the human forecaster. Our findings are robust in both objective and subjective task contexts.
Keywords: Consumer Behavior, Artificial Intelligence, Algorithm Aversion and Appreciation, Belief-updating, Forecasting