Can ChatGPT Decipher Fedspeak?

23 Pages. Posted: 7 Apr 2023; Last revised: 7 Jun 2023

Anne Lundgaard Hansen

Federal Reserve Bank of Richmond - Quantitative Supervision & Research

Sophia Kazinnik

Federal Reserve Banks - Quantitative Supervision & Research

Date Written: March 24, 2023

Abstract

This paper investigates the ability of Generative Pre-trained Transformer (GPT) models to decipher Fedspeak, the technical language the Federal Reserve uses to communicate monetary policy decisions. We evaluate how well GPT models classify the policy stance of Federal Open Market Committee announcements relative to human assessment. We show that GPT models deliver a considerable improvement in classification performance over other commonly used methods. Furthermore, we show that the GPT-4 model demonstrates reasoning abilities on par with humans in this domain.

Keywords: Natural Language Processing (NLP), Generative Pre-trained Transformer (GPT), Federal Reserve Communication, Applications, Artificial Intelligence (AI)

JEL Classification: E52, E58, C88

Suggested Citation

Hansen, Anne Lundgaard and Kazinnik, Sophia, Can ChatGPT Decipher Fedspeak? (March 24, 2023). Available at SSRN: https://ssrn.com/abstract=4399406 or http://dx.doi.org/10.2139/ssrn.4399406

Anne Lundgaard Hansen

Federal Reserve Bank of Richmond - Quantitative Supervision & Research

HOME PAGE: http://sites.google.com/view/anneh/

Sophia Kazinnik (Contact Author)

Federal Reserve Banks - Quantitative Supervision & Research
