To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis

Accepted for publication at Organization Science Special Issue on Theorizing Emerging Technologies

24 Pages · Posted: 13 Nov 2019 · Last revised: 13 Jan 2022

Sarah Lebovitz

University of Virginia - McIntire School of Commerce

Hila Lifshitz-Assaf

Harvard University, Laboratory for Innovation Science (LISH); University of Warwick, Warwick Business School

Natalia Levina

New York University

Date Written: October 5, 2021

Abstract

Artificial intelligence (AI) technologies promise to transform how professionals conduct knowledge work by augmenting their capabilities for making professional judgments. We know little, however, about how human-AI augmentation takes place in practice. Yet gaining this understanding is particularly important when professionals use AI tools to form judgments on critical decisions. We conducted an in-depth field study in a major US hospital where AI tools were used in three departments by diagnostic radiologists making breast cancer, lung cancer, and bone age determinations. The study illustrates the hindering effects of opacity that professionals experienced when using AI tools and explores how these professionals grappled with this opacity in practice. In all three departments, opacity resulted in professionals experiencing increased uncertainty because AI tool results often diverged from their initial judgment without providing underlying reasoning. In only one of the three departments did professionals consistently incorporate AI results into their final judgments, achieving what we call engaged augmentation. These professionals invested in AI interrogation practices – practices enacted by human experts to relate their own knowledge claims to AI knowledge claims. Professionals in the other two departments did not enact such practices and did not incorporate AI inputs into their final decisions, which we call un-engaged "augmentation." Our study unpacks the challenges involved in augmenting professional judgment with powerful, yet opaque, technologies and contributes to literatures on AI adoption in knowledge work.

Keywords: artificial intelligence, opacity, technology adoption and use, professional judgment, uncertainty, innovation, medical diagnosis

Suggested Citation

Lebovitz, Sarah and Lifshitz-Assaf, Hila and Levina, Natalia, To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis (October 5, 2021). Accepted for publication at Organization Science Special Issue on Theorizing Emerging Technologies, Available at SSRN: https://ssrn.com/abstract=3480593

Sarah Lebovitz (Contact Author)

University of Virginia - McIntire School of Commerce ( email )

P.O. Box 400173
Charlottesville, VA 22904-4173
United States

Hila Lifshitz-Assaf

Harvard University Lab for Innovation Sciences ( email )

Soldiers Field Road
Cotting House 321A
Boston, MA 02163
United States

Harvard LISH, Lab for Innovation Sciences ( email )

William James Hall, Sixth Floor
33 Kirkland Street
Cambridge, MA 02138

University of Warwick, Warwick Business School ( email )

West Midlands, CV4 7AL
United Kingdom

HOME PAGE: https://www.hilalifshitz.com/

Natalia Levina

New York University ( email )

44 West Fourth Street
New York, NY 10012
United States

HOME PAGE: http://pages.stern.nyu.edu/~nlevina

Paper statistics

Downloads: 472
Abstract Views: 2,492
Rank: 103,307