What is the Best Explainable Artificial Intelligence for Shaping User Behavior? Evidence from Two Randomized Field Experiments
35 Pages
Posted: 17 Jan 2024
Date Written: December 28, 2023
Abstract
This research examines whether and how various forms of explainable artificial intelligence (XAI), which aims to rectify the lack of transparency in AI algorithms, shape user behavior. Various XAI methods have been developed to improve user understanding of AI systems, yet there is no consensus on the best approach, as their effectiveness varies with user-specific requirements. Recent XAI research, mainly focused on the internal workings of AI models, may not adequately serve the practical needs of everyday users. To fill this gap, two randomized field experiments investigate three well-established XAI approaches: 1) feature importance, which explains the “how” aspect, 2) feature attribution, which explains the “why” aspect, and 3) counterfactual explanations, which explain the “what if” aspect. Study 1 examines whether XAI increases user adherence to activities suggested by a mobile health (mHealth) management platform. Study 2 examines whether XAI increases engagement on an online peer-to-peer (P2P) lending platform. Drawing on self-regulated learning theory, counterfactual explanations are expected to enhance self-regulation through better planning and more positive outcome expectations. Study 1 confirms that counterfactual explanations significantly improve planning behaviors, leading to a 10.1% increase in workout duration and a 5% increase in health activity records compared to the control group. In addition, counterfactual explanations are more effective when their features align with user preferences. However, feature attribution proves more effective for managing dietary behavior, especially when AI predicts positive outcomes (weight loss), whereas counterfactual explanations are more effective when AI predicts negative outcomes (weight gain).
Study 2 corroborates that counterfactual explanations lead to a 17.0% increase in financial investments and the selection of 21.7% more diverse products, compared to the control group. The effects are strongest for risk-averse, non-tech-savvy, and inexperienced investors. The findings show that XAI interventions make AI applications more understandable and trustworthy.
Keywords: counterfactual explanation, explainable artificial intelligence, fintech, mobile health, self-regulated learning
JEL Classification: M21, M15