Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing

112 Pages · Posted: 25 Jun 2021 · Last revised: 30 Nov 2022

Kevin Bauer

University of Mannheim; Leibniz Institute for Financial Research SAFE

Moritz von Zahn

Goethe University Frankfurt

Oliver Hinz

Goethe University Frankfurt - Faculty of Economics and Business Administration

Date Written: November 29, 2021

Abstract

Due to a growing number of initiatives and regulations, predictions of modern Artificial Intelligence (AI) systems increasingly come with explanations of why they behave the way they do. In this paper, we explore the impact of feature-based explanations on users' information processing. We designed two complementary empirical studies in which participants made incentivized decisions either on their own, with the aid of opaque AI predictions, or with the aid of explained AI predictions. In Study 1, laypeople engaged in a deliberately abstract investment game. In Study 2, experts from the real-estate industry estimated listing prices for real German apartments. Our results indicate that the provision of feature-based explanations paves the way for AI systems to reshape users' sense-making of information and their understanding of the world around them. Specifically, explanations change users' situational weighting of available information and evoke mental model adjustments. Crucially, these mental model adjustments are subject to confirmation bias, so that misconceptions can persist and even accumulate, possibly leading to suboptimal or biased decisions. Additionally, mental model adjustments create spillover effects that alter user behavior in related yet disparate domains. Overall, this paper provides important insights into potential downstream consequences of the broad deployment of modern explainable AI methods. In particular, side effects of mental model adjustments present a potential risk of manipulating user behavior and promoting discriminatory inclinations. Our findings may inform the efforts of companies building AI systems and of regulators aiming to mitigate problems associated with the black-box nature of many modern AI systems.
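For concreteness, a feature-based explanation in the sense studied here attaches to each input feature a quantified, signed contribution to an individual prediction, which the user sees alongside the prediction itself. The following minimal sketch illustrates the idea with the shap library on synthetic data loosely echoing the apartment-pricing setting of Study 2; the library choice, feature names, and data are illustrative assumptions, not the authors' experimental materials.

    # Minimal sketch of a feature-based explanation: per-feature contributions
    # to a single model prediction. The shap library, the feature names, and
    # the synthetic data are assumptions for illustration only.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))          # hypothetical features: size, age, location
    y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)  # additive feature attributions for tree models
    contributions = explainer.shap_values(X[:1])[0]

    for name, value in zip(["size", "age", "location"], contributions):
        print(f"{name}: {value:+.3f}")     # signed contribution to this one prediction

A participant in an "explained predictions" condition would see the prediction together with such signed per-feature contributions, which is the information channel through which the belief-updating effects described above can operate.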

Keywords: XAI, Explainable machine learning, Information processing, Belief updating, Algorithmic transparency

Suggested Citation

Bauer, Kevin and von Zahn, Moritz and Hinz, Oliver, Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users' Information Processing (November 29, 2021). SAFE Working Paper No. 315, Available at SSRN: https://ssrn.com/abstract=3872711 or http://dx.doi.org/10.2139/ssrn.3872711

Kevin Bauer (Contact Author)

University of Mannheim

L15, 1-6
Mannheim, 68131
Germany

HOME PAGE: https://www.bwl.uni-mannheim.de/bauer/

Leibniz Institute for Financial Research SAFE

Theodor-W.-Adorno-Platz 3
Frankfurt am Main, 60323
Germany

HOME PAGE: http://www.safe-frankfurt.de

Moritz von Zahn

Goethe University Frankfurt

Grüneburgplatz 1
Frankfurt am Main, 60323
Germany

Oliver Hinz

Goethe University Frankfurt - Faculty of Economics and Business Administration

Mertonstrasse 17-25
Frankfurt am Main, D-60325
Germany

Paper statistics

Downloads: 519
Abstract Views: 2,275
Download Rank: 115,680