We Need to Talk: Audio Surveys and Information Extraction
52 Pages · Posted: 4 Dec 2024 · Last revised: 7 May 2025
Abstract
Understanding individuals' beliefs, preferences, and motivations is essential in the social sciences. Recent technological advancements, notably large language models (LLMs) for analyzing open-ended responses and the diffusion of voice messaging, have the potential to significantly enhance our ability to elicit these dimensions. This study investigates the differences between oral and written responses to open-ended survey questions. Through a series of randomized controlled trials across three surveys (focused on AI, public policy, and international relations), we assigned respondents to answer either by audio or by text. Respondents who provided audio answers gave longer, though lexically simpler, responses than those who typed. By leveraging LLMs, we evaluated answer informativeness and found that oral responses differ in both quantity and quality, offering more information and containing more personal experiences than written responses. These findings suggest that oral responses to open-ended questions can capture richer, more personal insights, presenting a valuable method for understanding individual reasoning.
Keywords: survey design, open-ended questions, large language models, beliefs
JEL Classification: C83, D83