Do People Trust Humans More Than ChatGPT?

23 Pages | Posted: 29 Nov 2023 | Last revised: 5 Dec 2023

Joy Buchanan

Samford University

William Hickman

George Mason University - Interdisciplinary Center for Economic Science (ICES)

Date Written: November 16, 2023

Abstract

We explore whether people trust the accuracy of statements produced by large language models (LLMs) versus those written by humans. While LLMs have demonstrated impressive capabilities in generating text, concerns have been raised about their potential for misinformation, bias, or false responses. In this experiment, participants rate the accuracy of statements under different information conditions. Participants who are not explicitly informed of authorship tend to trust statements they believe are human-written more than those attributed to ChatGPT. However, when informed about authorship, participants show equal skepticism towards both human and AI writers. Participants who are explicitly informed about authorship also engage in costly fact-checking at a higher rate. These outcomes suggest that trust in AI-generated content is context-dependent.

Keywords: Artificial intelligence, machine learning, trust, belief

JEL Classification: O33, C91, D8

Suggested Citation

Buchanan, Joy and Hickman, William, Do People Trust Humans More Than ChatGPT? (November 16, 2023). GMU Working Paper in Economics No. 23-38, Available at SSRN: https://ssrn.com/abstract=4635674 or http://dx.doi.org/10.2139/ssrn.4635674

Joy Buchanan (Contact Author)

Samford University ( email )

800 Lakeshore Drive
Birmingham, AL 35229
United States

William Hickman

George Mason University - Interdisciplinary Center for Economic Science (ICES) ( email )

Vernon Smith Hall, 3434 Washington Blvd. North, 5t
Arlington, VA 22201
United States

Paper statistics

Downloads: 571
Abstract Views: 2,079
Rank: 101,399