Errors in Physician-AI Collaboration: Insights From a Mixed-methods Study of Explainable AI and Trust in Clinical Decision-making

42 Pages Posted: 26 Mar 2024

Date Written: March 26, 2024

Abstract

Artificial Intelligence-based diagnostic systems are increasingly prominent in healthcare, yet they face multiple cognitive challenges to their acceptance and effective use by healthcare professionals. One major issue is doctors’ low trust in AI advice, especially when that advice appears to come from a “black box” lacking clear diagnostic explanations. A second challenge is that doctors may blindly trust and accept incorrect AI diagnoses. Here we investigated doctors’ trust and decision-making errors when collaborating with AI and explainable AI (XAI) in a field study of 11 physicians making 330 diagnostic decisions on recurrent ear infections. We calibrated the AI to 60% accuracy so as to better differentiate the trust and errors that emerge when the AI is correct or incorrect and either confirms or conflicts with doctors’ diagnoses. To deepen understanding of the cognitive mechanisms involved, we conducted “think-aloud” protocols during qualitative interviews in which doctors described their reasoning when using or discarding AI diagnostic advice. Turning first to accuracy, we found that physician-AI collaboration outperformed physician decision-making alone. However, accuracy dropped substantially in scenarios where doctors lacked confidence in their own decisions and shifted from a correct diagnosis to an incorrect AI diagnosis. In terms of uptake, physicians exhibited “stickiness” in their diagnostic decisions in about two-thirds of all cases, consistent with distrust of AI and a potential commitment bias. Adding explanations via XAI persuaded more physicians to follow the AI’s advice, but about half of the doctors still did not change their decisions even with the aid of XAI. Virtually none of the physicians altered their decisions when the AI confirmed their incorrect diagnosis (a “false confirmation”), a pattern that accounted for two-thirds of all errors identified in our study. Our qualitative analysis showed that physicians neglected the possibility of AI error in cases of confirmation. We conclude by proposing an agenda for future research that could draw on cognitive psychology and explainable AI to improve physician-AI collaboration and foster balanced trust in AI applications in healthcare settings.

Note:

Funding Information: The author did not receive a specific grant for this research from any funding agency in the public, commercial, or not-for-profit sectors.

Conflict of Interests: None declared.

Informed Consent: All participants provided informed consent.

Keywords: Artificial Intelligence, Explainable AI, XAI, Decision-making, Clinical decision support systems, Healthcare, Dual process theory

Suggested Citation

Rosenbacke, Rikard, Errors in Physician-AI Collaboration: Insights From a Mixed-methods Study of Explainable AI and Trust in Clinical Decision-making (March 26, 2024). Available at SSRN: https://ssrn.com/abstract=4773350 or http://dx.doi.org/10.2139/ssrn.4773350

Rikard Rosenbacke (Contact Author)

Copenhagen Business School

Solbjerg Plads 3
Frederiksberg C, DK - 2000
Denmark

