Tailoring Explainable Artificial Intelligence: User Preferences and Profitability Implications for Firms

41 Pages. Posted: 27 Dec 2022. Last revised: 4 Jan 2023.

Stephanie Kelley

Ivey Business School, Western University

Anton Ovchinnikov

Smith School of Business - Queen's University; INSEAD - Decision Sciences

Gabriel Ramolete

Aboitiz Data Innovation

Keerthana Sureshbabu

Aboitiz Data Innovation

Adrienne Heinrich

UnionBank of the Philippines; Aboitiz Data Innovation

Date Written: December 14, 2022

Abstract

We conduct a lab-in-the-field experiment at a large institutional lender in Asia to study the preferences of real AI users (loan officers) with respect to the tailoring of explainable artificial intelligence (XAI). Our experiment uses a choice-based conjoint (CBC) survey in which we vary the XAI approach, the type of underlying AI model (developed by the lender's data scientists with real data on the exact loans that our experimental subjects issue), the number of features in the visualization, the applicant aggregation level, and the lending outcome. We analyze the survey data using the Hierarchical Bayes method, generating part-worth utilities for each AI user and at the sample level across every attribute combination. We observe that (i) the XAI approach matters more to users than any other attribute; (ii) AI users prefer certain combinations of XAI approaches and models to be used together; (iii) users prefer nine or six features in the XAI visualizations; (iv) users have no preference over the applicant aggregation level; (v) their preferences do not change across positive and negative lending outcomes; and (vi) user preferences do not match the profitability rankings of the AI models. We then present a cost-of-misclassification profitability analysis across several simulated levels of AI user algorithm aversion. We show how firms can strategically combine models and XAI approaches to drive profitability, integrating the preferences of the AI users who incorporate AI models into their decision-making with those of the data scientists who build such models.
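For readers unfamiliar with cost-of-misclassification analysis, the sketch below illustrates the general idea in Python: expected lending profit depends on the margin earned on correctly approved applicants and the losses on approved applicants who later default, and algorithm aversion can be simulated as the share of model recommendations a loan officer overrides. Every name, count, and dollar figure in the sketch is a hypothetical assumption for illustration; it is not the paper's data, model, or code.

```python
# Hypothetical sketch of a cost-of-misclassification profit comparison for two
# credit-scoring models, with algorithm aversion simulated as the share of model
# recommendations that loan officers reverse. All counts, margins, and names
# below are illustrative assumptions, not the paper's data or code.

def expected_profit(confusion, margin_good_loan, loss_per_default, aversion):
    """Blend profit from followed and overridden recommendations.

    confusion: counts of applicants by (model recommendation, true outcome).
    aversion:  fraction of recommendations the officer reverses (0.0 to 1.0).
    """
    approve_good = confusion["approve_good"]  # approved and repaid -> earn margin
    approve_bad = confusion["approve_bad"]    # approved and defaulted -> incur loss
    deny_good = confusion["deny_good"]        # denied but would have repaid -> forgone margin
    deny_bad = confusion["deny_bad"]          # denied and would have defaulted -> avoided loss

    # Profit if every recommendation is followed.
    followed = approve_good * margin_good_loan - approve_bad * loss_per_default

    # Profit if every recommendation is reversed (a crude assumption): denied-good
    # applicants now earn the margin, denied-bad applicants now default, and the
    # model's approvals are declined, earning nothing.
    overridden = deny_good * margin_good_loan - deny_bad * loss_per_default

    return (1.0 - aversion) * followed + aversion * overridden


# Two hypothetical models scoring the same 1,000 applicants, with different error profiles.
models = {
    "model_A": {"approve_good": 750, "approve_bad": 50, "deny_good": 160, "deny_bad": 40},
    "model_B": {"approve_good": 820, "approve_bad": 55, "deny_good": 70, "deny_bad": 55},
}

for aversion in (0.0, 0.2, 0.4):  # simulated levels of algorithm aversion
    ranking = sorted(
        models,
        key=lambda m: expected_profit(models[m], margin_good_loan=1_000,
                                      loss_per_default=8_000, aversion=aversion),
        reverse=True,
    )
    print(f"aversion={aversion:.1f}: most profitable -> {ranking[0]}")
```

Under these made-up numbers, the model that is most profitable when officers follow its recommendations is no longer most profitable once a sizeable share of recommendations is overridden; this interaction between model choice and user behavior is the kind of effect the paper's profitability analysis quantifies.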

Keywords: human-AI collaboration, artificial intelligence, explainability, XAI, choice-based conjoint survey, Hierarchical Bayes analysis, cost of misclassification, algorithm aversion

Suggested Citation

Kelley, Stephanie and Ovchinnikov, Anton and Ramolete, Gabriel Isaac and Sureshbabu, Keerthana Kashi and Heinrich, Adrienne, Tailoring Explainable Artificial Intelligence: User Preferences and Profitability Implications for Firms (December 14, 2022). Available at SSRN: https://ssrn.com/abstract=4305480 or http://dx.doi.org/10.2139/ssrn.4305480

Stephanie Kelley (Contact Author)

Ivey Business School, Western University

1255 Western Road
London, Ontario N6G 0N1
Canada

Anton Ovchinnikov

Smith School of Business - Queen's University

143 Union Street West
Kingston, Ontario K7L 3N6
Canada

INSEAD - Decision Sciences

United States

Gabriel Isaac Ramolete

Aboitiz Data Innovation

Keerthana Kashi Sureshbabu

Aboitiz Data Innovation

Adrienne Heinrich

UnionBank of the Philippines

Metro Manila
Philippines

Aboitiz Data Innovation

Singapore
Singapore

Paper statistics

Downloads: 37
Abstract Views: 163