Judgments of Research Co-Created by Generative AI: Experimental Evidence

17 Pages · Posted: 22 May 2023

Paweł Niszczota

Poznań University of Economics and Business - Humans & AI Laboratory (HAI Lab)

Paul Conway

University of Southampton

Date Written: May 3, 2023

Abstract

The introduction of ChatGPT has fueled public debate on the use of generative AI (large language models; LLMs), including its use by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust and devalue researchers and scientific output. Participants (N = 402) considered a researcher who delegates elements of the research process to a PhD student or an LLM, and rated (1) the moral acceptability of the delegation, (2) their trust in the scientist to oversee future projects, and (3) the accuracy and quality of the resulting output. People judged delegating to an LLM as less acceptable than delegating to a human (d = -0.78). Delegation to an LLM also reduced trust in the scientist to oversee future research projects (d = -0.80), and people expected the results to be less accurate and of lower quality (d = -0.85). We discuss how this devaluation might translate into underreporting of generative AI use.
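
The effect sizes above are Cohen's d values, i.e., standardized mean differences between the two delegation conditions. As a reference sketch only (assuming the conventional pooled-standard-deviation estimator; the abstract does not specify which variant the authors used):

d = \frac{M_1 - M_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}

On this reading, d ≈ -0.8 indicates ratings in the LLM condition were roughly 0.8 pooled standard deviations lower than in the human condition, conventionally a large effect.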

Keywords: trust in science, metascience, ChatGPT, GPT, large language models, generative AI, experiment

JEL Classification: G00, J24, O33, O34

Suggested Citation

Niszczota, Paweł and Conway, Paul, Judgments of Research Co-Created by Generative AI: Experimental Evidence (May 3, 2023). Available at SSRN: https://ssrn.com/abstract=4443934 or http://dx.doi.org/10.2139/ssrn.4443934

Paweł Niszczota (Contact Author)

Poznań University of Economics and Business - Humans & AI Laboratory (HAI Lab)

al. Niepodległości 10
Poznań, 61-875
Poland

Paul Conway

University of Southampton
