Foundation Models Are Exciting, but They Should Not Disrupt the Foundations of Caring

9 Pages · Posted: 19 May 2023

Jessica Morley

Yale University - Digital Ethics Center

Luciano Floridi

Yale University - Digital Ethics Center; University of Bologna - Department of Legal Studies

Date Written: April 20, 2023

Abstract

The arrival of Foundation Models in general, and Large Language Models (LLMs) in particular, capable of ‘passing’ medical qualification exams at or above a human level, has sparked a new wave of ‘the chatbot will see you now’ hype. It is exciting to witness such impressive technological progress, and LLMs have the potential to benefit healthcare systems, providers, and patients. However, these benefits are unlikely to be realised by propagating the myth that, just because LLMs are sometimes capable of passing medical exams, they will one day be capable of supplanting any of the main diagnostic, prognostic, or treatment tasks of a human clinician. Contrary to popular discourse, LLMs are not necessarily more efficient, objective, or accurate than human healthcare providers. They are vulnerable to errors in the underlying ‘training’ data and prone to ‘hallucinating’ false information rather than reporting facts. Moreover, there are nuanced, qualitative, or less measurable reasons why it is prudent to be wary of hyperbolic claims regarding the transformative power of LLMs. Here we discuss these reasons, including contextualisation, empowerment, learned intermediaries, manipulation, and empathy. We conclude that overstating the current potential of LLMs does a disservice to the complexity of healthcare and the skills of healthcare practitioners, and risks a ‘costly’ new AI winter. A balanced discussion that recognises both the potential benefits and the limitations can help avoid this outcome.

Note:
Funding Information: JM is the recipient of a Doctoral Studentship from the Wellcome Trust.

Conflict of Interests: The authors declare no conflicts of interest.

Keywords: Artificial Intelligence, Large Language Models, Ethics, Healthcare, Data Ethics, ChatGPT

Suggested Citation

Morley, Jessica and Floridi, Luciano, Foundation Models Are Exciting, but They Should Not Disrupt the Foundations of Caring (April 20, 2023). Available at SSRN: https://ssrn.com/abstract=4424821 or http://dx.doi.org/10.2139/ssrn.4424821

Jessica Morley (Contact Author)

Yale University - Digital Ethics Center (email)

85 Trumbull Street
New Haven, CT 06511
United States

Luciano Floridi

Yale University - Digital Ethics Center (email)

85 Trumbull Street
New Haven, CT 06511
United States
203-432-6473 (Phone)

University of Bologna - Department of Legal Studies (email)

Via Zamboni 22
Bologna, BO 40100
Italy

HOME PAGE: http://www.unibo.it/sitoweb/luciano.floridi/en
