Why Providing Humans with Interpretable Algorithms May, Counterintuitively, Lead to Lower Decision-making Performance

39 Pages · Posted: 13 Oct 2022 · Last revised: 12 Dec 2022

Timothy DeStefano

Harvard University - Business School (HBS)

Katherine Kellogg

Massachusetts Institute of Technology (MIT) - Sloan School of Management

Michael Menietti

Harvard University - Business School (HBS)

Luca Vendraminelli

Polytechnic University of Milan

Date Written: October 12, 2022

Abstract

How is algorithmic model interpretability related to human acceptance of algorithmic recommendations and to performance on decision-making tasks? We explored these questions in a multi-method field study of a large multinational fashion organization. We first conducted a quantitative field experiment comparing the use of two algorithmic models, one interpretable and one uninterpretable, designed to assist employees in deciding how many products to send to each of the organization's stores. Contrary to what the literature on interpretable algorithms would lead us to expect, under conditions of high perceived uncertainty, decision makers' use of the uninterpretable algorithmic model was associated with higher acceptance of algorithmic recommendations and higher task performance than was their use of an interpretable algorithmic model with a similar level of performance. We next investigated this puzzling result through 31 interviews with 14 employees: 2 algorithm developers, 2 managers, and 10 decision makers. We advance two concepts that suggest a refinement of theory on interpretable algorithms. First, overconfident troubleshooting: a decision maker may reject a recommendation from an interpretable algorithm because they believe they understand the inner workings of complex processes better than they actually do. Second, social proofing the algorithm: including respected peers in the algorithm development and testing process may make decision makers more likely to accept recommendations from an uninterpretable algorithm in situations of high perceived uncertainty, because they may seek to reduce that uncertainty by drawing on the opinions of people who share their knowledge base and experience.

Keywords: Interpretable AI, Artificial Intelligence, Machine Learning, Algorithm Aversion, AI Adoption, Firm Productivity, AI and Strategy, Human-in-the-loop Decision Making

Suggested Citation

DeStefano, Timothy and Kellogg, Katherine and Menietti, Michael and Vendraminelli, Luca, Why Providing Humans with Interpretable Algorithms May, Counterintuitively, Lead to Lower Decision-making Performance (October 12, 2022). MIT Sloan Research Paper No. 6797, 2022, Available at SSRN: https://ssrn.com/abstract=4246077 or http://dx.doi.org/10.2139/ssrn.4246077

Timothy DeStefano

Harvard University - Business School (HBS)

Soldiers Field Road
Morgan 270C
Boston, MA 02163
United States

Katherine Kellogg (Contact Author)

Massachusetts Institute of Technology (MIT) - Sloan School of Management

100 Main Street
E62-416
Cambridge, MA 02142
United States

Michael Menietti

Harvard University - Business School (HBS)

Soldiers Field Road
Morgan 270C
Boston, MA 02163
United States

Luca Vendraminelli

Polytechnic University of Milan

Paper statistics

Downloads: 778
Abstract Views: 4,806
Rank: 64,086