Algorithm Adoption and Explanations: An Experimental Study on Self and Other Perspectives
34 Pages · Posted: 19 Nov 2024 · Last revised: 25 Nov 2024
Date Written: November 10, 2024
Abstract
People are reluctant to follow recommendations from machine-learning systems. To address this, research suggests providing explanations about the underlying algorithm to increase adoption. However, the degree to which adoption depends on the party impacted by a user’s decision (the user vs. a third party) and whether explanations boost adoption in both settings is not well understood. These questions are particularly relevant in contexts such as medical, judicial, and financial decisions, where a third party bears the main impact of a user’s decision. We examine these questions using controlled, incentivized experiments. We design a prediction task in which participants observe fictitious objects and must predict their color with the aid of algorithmic recommendations. We manipulate whether (i) a participant receives an explanation about the algorithm and (ii) the impacted party is the participant (Self treatment) or a matched individual (Other treatment). Our findings reveal that, in the absence of explanations, algorithmic adoption is similar regardless of the impacted party. We also find that explanations significantly increase adoption in Self, where they help attenuate negative responses to algorithm errors over time. However, this pattern is not observed in Other, where explanations have no discernible effect, leading to significantly lower adoption than in Self in the last rounds. These results suggest that further strategies, beyond explanations, need to be explored to boost adoption in settings where the impact is predominantly felt by a third party.
Keywords: Behavioral Operations, Self-Other Differences, Explainable Artificial Intelligence, Human-Computer Interaction