Using Large Language Models to Explore Contextualization Effects in Economics-Based Accounting Experiments
127 Pages · Posted: 17 Jul 2024 · Last revised: 13 Dec 2024
Date Written: November 28, 2024
Abstract
Most economics-based experimental accounting research tests hypotheses about the behavior of professionals in business settings. There is an ongoing debate about whether the specific business context should be made explicit in experiments or whether researchers should instead rely on abstract, stylized settings with neutrally framed instructions. In this paper, we argue that the effects of contextualization are highly idiosyncratic and therefore difficult to anticipate without empirical testing. We then conjecture that large language models (LLMs) can help researchers design experiments. We introduce “botex”, a software tool that allows researchers to run pilot tests with simulated participants. We illustrate the use of botex in a series of experiments comparing neutral and contextualized versions of three widely used experimental games. We find that contextualization has significant but not always consistent or straightforward effects on participants’ behavior, highlighting the potential value of using LLMs in experimental design.
Keywords: Framing, Large Language Models, Experiments, Games, Contextualization
JEL Classification: B49, C81, C88, C90, D90, M40, M50