Regulating Explainable AI (XAI) May Harm Consumers
57 Pages · Posted: 17 Oct 2023 · Last revised: 11 Apr 2024
Date Written: September 30, 2020
Abstract
The most recent AI algorithms lack interpretability. eXplainable AI (XAI) aims to address this by explaining AI decisions to consumers. Although it is commonly believed that requiring fully transparent XAI enhances consumer surplus, our paper challenges this view. We present a game-theoretic model in which a policymaker maximizes consumer surplus in a duopoly market with heterogeneous consumer preferences. The model jointly captures AI accuracy, explanation depth, and explanation method. We find that partial explanations can arise in equilibrium in an unregulated setting. Furthermore, we identify scenarios in which consumers' and firms' preferences for full explanation are misaligned. In these cases, mandating full explanations may not be socially optimal and can worsen outcomes for both firms and consumers. Flexible XAI policies outperform both the full-transparency and unregulated extremes.
Keywords: Machine Learning, Explainable AI, Economics of AI, Regulation, Fairness