Human Favoritism, Not AI Aversion: People’s Perceptions (and Bias) Toward Generative AI, Human Experts, and Human-GAI Collaboration in Persuasive Content Generation

83 Pages Posted: 25 May 2023 Last revised: 21 Sep 2023

Yunhao Zhang

Massachusetts Institute of Technology (MIT) - Sloan School of Management; University of California, Berkeley - Haas School of Business

Renee Gosline

Massachusetts Institute of Technology (MIT) - Sloan School of Management

Date Written: May 20, 2023

Abstract

With the wide availability of Large Language Models and generative AI, there are four primary paradigms for human-AI collaboration: human only, AI only (ChatGPT-4), augmented human (a human makes the final decision with AI output as a reference), and augmented AI (the AI makes the final decision with human output as a reference). In partnership with one of the world’s leading consulting firms, we enlist professional content creators and ChatGPT-4 to create advertising content for products and persuasive content for campaigns under each of these paradigms. First, we find that, contrary to the expectations of the existing algorithm-aversion literature on conventional predictive AI, content generated by generative AI and augmented AI is perceived as higher in quality than content produced by human experts and augmented human experts. Second, revealing the source of the content reduces, but does not reverse, the perceived quality gap between human- and AI-generated content. This evaluation bias is driven predominantly by human favoritism rather than AI aversion: knowing that the same content was created by a human expert increases its reported perceived quality, whereas knowing that AI was involved in the creation process does not affect its perceived quality. Further analysis suggests that this bias is not due to a “quality prime”: knowing that the content they are about to evaluate comes from competent creators (e.g., industry professionals and state-of-the-art AI), without knowing the exact creator of each piece of content, does not increase participants’ perceived quality.

Keywords: Large Language Models, Artificial Intelligence, Consumer Perception, Consumer Bias

Suggested Citation

Zhang, Yunhao and Gosline, Renee, Human Favoritism, Not AI Aversion: People’s Perceptions (and Bias) Toward Generative AI, Human Experts, and Human-GAI Collaboration in Persuasive Content Generation (May 20, 2023). Available at SSRN: https://ssrn.com/abstract=4453958 or http://dx.doi.org/10.2139/ssrn.4453958

Yunhao Zhang (Contact Author)

Massachusetts Institute of Technology (MIT) - Sloan School of Management ( email )

100 Main Street
E62-416
Cambridge, MA 02142
United States

HOME PAGE: https://mitsloan.mit.edu/phd/students/yunhao-zhang

University of California, Berkeley - Haas School of Business ( email )

545 Student Services Building, #1900
2220 Piedmont Avenue
Berkeley, CA 94720
United States

Renee Gosline

Massachusetts Institute of Technology (MIT) - Sloan School of Management ( email )

100 Main Street
E62-416
Cambridge, MA 02142
United States

Paper statistics

Downloads: 2,884
Abstract Views: 12,765
Rank: 540,243