What is the Best Explainable Artificial Intelligence for Shaping User Behavior? Evidence from Two Randomized Field Experiments

35 Pages · Posted: 17 Jan 2024

Donggyu (David) Min

Korea Advanced Institute of Science and Technology (KAIST)

Sunghun Chung

George Washington University - School of Business

Chul Ho Lee

College of Business, Korea Advanced Institute of Science and Technology (KAIST)

Wenjing Duan

George Washington University - School of Business

Date Written: December 28, 2023

Abstract

This research examines whether and how various forms of explainable artificial intelligence (XAI), which aim to rectify the lack of transparency in AI algorithms, shape user behavior. Various XAI methods have been developed to improve user understanding of AI systems, yet there is no consensus on the best approach, as their effectiveness varies with user-specific requirements. Recent XAI research, focused mainly on the internal workings of AI models, may not adequately serve the practical needs of everyday users. To fill this gap, we conduct two randomized field experiments investigating three well-established XAI approaches: 1) feature importance, which explains the “how” aspect, 2) feature attribution, which explains the “why” aspect, and 3) counterfactual explanations, which explain the “what if” aspect. Study 1 examines whether XAI increases user adherence to activities suggested by a mobile health (mHealth) management platform. Study 2 examines whether XAI increases engagement with an online peer-to-peer (P2P) lending platform. Drawing on self-regulated learning theory, we expect counterfactual explanations to enhance self-regulation through better planning and more positive outcome expectations. Study 1 confirms that counterfactual explanations significantly improve planning behaviors, leading to a 10.1% increase in workout duration and a 5% increase in health activity records relative to the control group. In addition, counterfactual explanations are more effective when their features align with user preferences. However, feature attribution proves more effective for managing dietary behavior, especially when the AI predicts positive outcomes (weight loss), whereas counterfactual explanations are more effective when the AI predicts negative outcomes (weight gain). Study 2 corroborates that counterfactual explanations lead to a 17.0% increase in financial investment and a 21.7% increase in the diversity of selected products, relative to the control group. The effects are strongest for risk-averse, non-tech-savvy, and inexperienced investors. The findings show that XAI interventions make AI applications more understandable and trustworthy.
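To make the three explanation styles concrete, the sketch below shows how each could be computed for a toy linear model predicting a user's weekly weight change, loosely in the spirit of the mHealth setting of Study 1. This is not the paper's method or data; the model, feature names, and numbers are illustrative assumptions.

# Minimal Python sketch of the three XAI styles the abstract names.
# All weights and inputs are made up for illustration.

# Toy model: predicted weight change (kg/week) =
#   bias + w_workout * workout_hours + w_surplus * calorie_surplus
WEIGHTS = {"workout_hours": -0.15, "calorie_surplus_100kcal": 0.08}
BIAS = 0.1

def predict(x):
    """Linear prediction for one user."""
    return BIAS + sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def feature_importance():
    """'How' the model works globally: importance = weight magnitude."""
    return {f: abs(w) for f, w in WEIGHTS.items()}

def feature_attribution(x, baseline):
    """'Why' this prediction: each feature's contribution relative to a
    baseline user (for a linear model, this is the exact analogue of a
    SHAP-style attribution)."""
    return {f: WEIGHTS[f] * (x[f] - baseline[f]) for f in WEIGHTS}

def counterfactual(x, feature, target=0.0):
    """'What if': the smallest change to one feature that moves the
    prediction to `target` (e.g., from weight gain to no gain)."""
    rest = BIAS + sum(WEIGHTS[f] * x[f] for f in WEIGHTS if f != feature)
    needed = (target - rest) / WEIGHTS[feature]
    return {feature: needed, "change_needed": needed - x[feature]}

user = {"workout_hours": 2.0, "calorie_surplus_100kcal": 6.0}
base = {"workout_hours": 3.0, "calorie_surplus_100kcal": 3.0}

print(predict(user))                          # 0.28 kg/week (predicted gain)
print(feature_importance())                   # how: global feature weights
print(feature_attribution(user, base))        # why: this user's contributions
print(counterfactual(user, "workout_hours"))  # what if: hours needed for 0 gain

Under these assumed weights, the counterfactual explanation would tell the user that raising workout hours from 2.0 to about 3.9 per week flips the prediction from gain to no gain, which is the kind of actionable "what if" statement the abstract contrasts with the "how" and "why" styles.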

Keywords: counterfactual explanation, explainable artificial intelligence, fintech, mobile health, self-regulated learning

JEL Classification: M21, M15

Suggested Citation

Min, Donggyu (David) and Chung, Sunghun and Lee, Chul Ho and Duan, Wenjing, What is the Best Explainable Artificial Intelligence for Shaping User Behavior? Evidence from Two Randomized Field Experiments (December 28, 2023). Available at SSRN: https://ssrn.com/abstract=4688258 or http://dx.doi.org/10.2139/ssrn.4688258

Donggyu (David) Min

Korea Advanced Institute of Science and Technology (KAIST)

373-1 Kusong-dong
Yuson-gu
Taejon 305-701
Korea, Republic of (South Korea)

Sunghun Chung (Contact Author)

George Washington University - School of Business

Washington, DC 20052
United States

Chul Ho Lee

College of Business, Korea Advanced Institute of Science and Technology (KAIST)

291 Daehak-ro, Yuseong-gu
Daejeon, 34141
Korea, Republic of (South Korea)

Wenjing Duan

George Washington University - School of Business

2121 I Street NW
Washington, DC 20052
United States
