Can ChatGPT Decipher Fedspeak?
28 Pages · Posted: 7 Apr 2023 · Last revised: 11 Apr 2024
Date Written: April 10, 2024
Abstract
This paper investigates the ability of Generative Pre-trained Transformer (GPT) models to decipher Fedspeak, a term for the technical language the Federal Reserve uses to communicate monetary policy decisions. We evaluate the ability of GPT models to classify the policy stance of Federal Open Market Committee announcements relative to human assessment. We show that GPT models deliver a considerable improvement in classification performance over other commonly used methods. We then demonstrate that the GPT-4 model can provide explanations for its classifications that are on par with human reasoning. Finally, we show that the GPT-4 model can be used to identify macroeconomic shocks using the narrative approach of Romer and Romer (1989, 2023).
Keywords: Natural Language Processing (NLP), Generative Pre-trained Transformer (GPT), Federal Reserve Communication, Applications, Artificial Intelligence (AI)
JEL Classification: E52, E58, C88