Can LLMs Capture Human Preferences?

33 Pages · Posted: 12 May 2023 · Last revised: 20 Feb 2024

Ali Goli

University of Washington - Michael G. Foster School of Business

Amandeep Singh

University of Washington - Michael G. Foster School of Business; University of Pennsylvania - The Wharton School

Date Written: May 4, 2023

Abstract

We explore the viability of Large Language Models (LLMs), specifically OpenAI's GPT-3.5 and GPT-4, in emulating human survey respondents and eliciting preferences, with a focus on intertemporal choices. Leveraging the extensive literature on intertemporal discounting for benchmarking, we examine responses from LLMs across various languages and compare them to human responses, exploring preferences between smaller-sooner and larger-later rewards. Our findings reveal that both GPT models are less patient than humans, with GPT-3.5 exhibiting a lexicographic preference for earlier rewards, unlike human decision-makers. Though GPT-4 does not display lexicographic preferences, its measured discount rates are still considerably larger than those found in humans. Interestingly, GPT models show greater patience in languages with weak future-time reference, such as German and Mandarin, aligning with existing literature suggesting a correlation between language structure and intertemporal preferences. We demonstrate how prompting GPT to explain its decisions, a procedure we term "chain-of-thought conjoint," can mitigate, but does not eliminate, discrepancies between LLM and human responses. While directly eliciting preferences using LLMs may yield misleading results, combining chain-of-thought conjoint with topic modeling aids hypothesis generation, enabling researchers to explore the underpinnings of preferences. Chain-of-thought conjoint gives marketers a structured framework for using LLMs to identify attributes or factors that explain preference heterogeneity across customers and contexts.
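To make the elicitation procedure concrete, below is a minimal sketch of how one intertemporal-choice question might be posed to a GPT model, in both the direct form and the chain-of-thought conjoint form described above. The prompt wording, model name, and helper functions are illustrative assumptions rather than the authors' materials; only the standard OpenAI chat-completions call is assumed.

```python
# A minimal, illustrative sketch (not the authors' code): pose one
# smaller-sooner vs. larger-later question to a GPT model, optionally
# asking it to explain its reasoning first ("chain-of-thought conjoint").
from openai import OpenAI  # assumes the openai package (>= 1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_choice(sooner: float, later: float, delay_months: int,
               model: str = "gpt-4", chain_of_thought: bool = False) -> str:
    """Return the model's answer to one intertemporal-choice question."""
    question = (
        f"Would you prefer (A) ${sooner:.0f} today or "
        f"(B) ${later:.0f} in {delay_months} months?"
    )
    if chain_of_thought:
        # Chain-of-thought conjoint: elicit the reasons behind the choice;
        # the explanation text can later be fed into a topic model.
        question += " Explain the factors behind your decision, then state A or B."
    else:
        question += " Answer with A or B only."
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
        temperature=0,
    )
    return resp.choices[0].message.content


def implied_annual_rate(sooner: float, later: float, delay_months: int) -> float:
    """Annual discount rate at which the two rewards are equally attractive,
    assuming exponential discounting: sooner = later / (1 + r)**(delay / 12)."""
    years = delay_months / 12
    return (later / sooner) ** (1 / years) - 1


# A respondent indifferent between $100 today and $110 in 6 months
# discounts the future at roughly (1.10)**2 - 1 = 21% per year.
print(f"{implied_annual_rate(100, 110, 6):.1%}")
```

Sweeping the later amount until the model's answer flips from A to B brackets its indifference point, and implied_annual_rate converts that point into a discount rate comparable to human benchmarks from the discounting literature.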

Keywords: Large language models, Intertemporal preferences, GPT, Generative AI, LLMs, Chain-of-Thought, Conjoint Analysis

JEL Classification: D83, D91, O33, P46, Z13

Suggested Citation

Goli, Ali and Singh, Amandeep, Can LLMs Capture Human Preferences? (May 4, 2023). Available at SSRN: https://ssrn.com/abstract=4437617 or http://dx.doi.org/10.2139/ssrn.4437617

Ali Goli (Contact Author)

University of Washington - Michael G. Foster School of Business

Box 353200
Seattle, WA 98195-3200
United States

Amandeep Singh

University of Washington - Michael G. Foster School of Business

Box 353200
Seattle, WA 98195-3200
United States

University of Pennsylvania - The Wharton School

3641 Locust Walk
Philadelphia, PA 19104-6365
United States
