Tailoring Explainable Artificial Intelligence: User Preferences and Profitability Implications for Firms
41 Pages · Posted: 27 Dec 2022 · Last revised: 4 Jan 2023
Date Written: December 14, 2022
Abstract
We conduct a lab-in-the-field experiment at a large institutional lender in Asia to study the preferences of real AI users (loan officers) regarding the tailoring of explainable artificial intelligence (XAI). Our experiment uses a choice-based conjoint (CBC) survey in which we vary the XAI approach, the type of underlying AI model (developed by the lender's data scientists with real data on the exact loans that our experimental subjects issue), the number of features in the visualization, the applicant aggregation level, and the lending outcome. We analyze the survey data using the Hierarchical Bayes method, generating part-worth utilities for each AI user and at the sample level across every attribute combination. We observe that (i) the XAI approach is the most important factor, outweighing every other attribute; (ii) AI users prefer certain combinations of XAI approaches and models to be used together; (iii) users prefer six or nine features in the XAI visualizations; (iv) users have no preference regarding the applicant aggregation level; (v) their preferences do not change across positive or negative lending outcomes; and (vi) user preferences do not match the profitability rankings of the AI models. We then present a cost-of-misclassification profitability analysis across several simulated levels of AI user algorithm aversion. We show how firms can strategically combine models and XAI approaches to drive profitability, integrating the preferences of the AI users who are to incorporate AI models into their decision-making with those of the data scientists who build such models.
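The cost-of-misclassification logic described above can be sketched in a minimal form. All parameters below (loan counts, default rates, error rates, per-loan costs, and the simple override model of algorithm aversion) are illustrative assumptions, not the paper's actual data or methodology.

```python
# Hedged sketch: expected misclassification cost under algorithm aversion.
# Assumption: an averse user overrides the model with probability `aversion`
# and approves every loan, incurring the full false-negative cost on bad loans.

def expected_cost(n_loans, default_rate, fp_rate, fn_rate,
                  cost_fp, cost_fn, aversion):
    """Expected misclassification cost, blending model-followed and
    overridden decisions by the aversion probability."""
    n_bad = n_loans * default_rate
    n_good = n_loans - n_bad
    # When the model's recommendation is followed: pay for false positives
    # (good loans rejected) and false negatives (bad loans approved).
    model_cost = n_good * fp_rate * cost_fp + n_bad * fn_rate * cost_fn
    # When the user overrides and approves everything: every bad loan defaults.
    override_cost = n_bad * cost_fn
    return (1 - aversion) * model_cost + aversion * override_cost

# Compare expected costs across simulated aversion levels.
for aversion in (0.0, 0.25, 0.5):
    cost = expected_cost(n_loans=1000, default_rate=0.1,
                         fp_rate=0.05, fn_rate=0.2,
                         cost_fp=100, cost_fn=1000, aversion=aversion)
    print(f"aversion={aversion:.2f}: expected cost = {cost:,.0f}")
```

Under this toy setup, higher aversion pushes the realized cost away from the model's error profile and toward the cost of unassisted decisions, which is why pairing models with the XAI approaches users actually prefer can raise profitability.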
Keywords: human-AI collaboration, artificial intelligence, explainability, XAI, choice-based conjoint survey, Hierarchical Bayes analysis, cost of misclassification, algorithm aversion