More Human than Human: Measuring ChatGPT Political Bias

Motoki, F., Pinho Neto, V., & Rodrigues, V. (2023). More human than human: Measuring ChatGPT political bias. Public Choice. https://doi.org/10.1007/s11127-023-01097-2

Posted: 5 Mar 2023 Last revised: 27 Sep 2023

Fabio Yoshio Suguri Motoki

University of East Anglia (UEA) - Norwich Business School

Valdemar Pinho Neto

Brazilian School of Economics and Finance

Victor Rangel

Instituto de Ensino e Pesquisa (INSPER)

Date Written: July 18, 2023

Abstract

We investigate the political bias of a large language model (LLM), ChatGPT, which has become popular for retrieving factual information and generating content. Although ChatGPT assures users that it is impartial, the literature suggests that LLMs exhibit bias involving race, gender, religion, and political orientation. Political bias in LLMs can have adverse political and electoral consequences similar to bias from traditional and social media, and it can be harder to detect and eradicate than gender or racial bias. We propose a novel empirical design to infer whether ChatGPT has political biases by asking it to impersonate someone from a given side of the political spectrum and comparing these answers with its default responses. We also propose dose-response, placebo, and profession-politics alignment robustness tests. To reduce concerns about the randomness of the generated text, we collect answers to the same questions 100 times, with question order randomized on each round. We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK. These results raise real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges that the Internet and social media pose to political processes. Our findings have important implications for stakeholders in policymaking, media, politics, and academia.
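The collection procedure described in the abstract (repeated rounds with randomized question order, comparing default answers against impersonated ones) can be sketched as follows. This is a minimal illustration, not the authors' actual code: the `ask` function is a hypothetical stub standing in for a real ChatGPT API call, and the questions and canned answers are invented for demonstration.

```python
import random
from collections import Counter

def ask(question, persona=None):
    # Hypothetical stub for an LLM query. A real implementation would call
    # the ChatGPT API, prefixing an impersonation prompt such as
    # "Answer the following as if you were a {persona}."
    canned = {"default": "agree", "Democrat": "agree", "Republican": "disagree"}
    return canned[persona or "default"]

def collect(questions, persona=None, rounds=100, seed=0):
    """Ask every question `rounds` times, shuffling question order each
    round to reduce concerns about order effects and answer randomness."""
    rng = random.Random(seed)
    answers = {q: Counter() for q in questions}
    for _ in range(rounds):
        order = questions[:]
        rng.shuffle(order)  # new random order every round
        for q in order:
            answers[q][ask(q, persona)] += 1
    return answers

questions = ["Taxes on the wealthy should rise", "Markets self-regulate"]
default = collect(questions)
democrat = collect(questions, persona="Democrat")

# Overlap between default answers and Democrat-impersonation answers:
# a high agreement rate would suggest the default leans toward that side.
match = sum(sum((default[q] & democrat[q]).values()) for q in questions)
total = sum(sum(c.values()) for c in default.values())
print(f"agreement rate: {match / total:.2f}")
```

The key design choice mirrored here is that bias is inferred relatively, by comparing the model's default answers with its own impersonated answers, rather than by judging individual answers in isolation.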

Keywords: Bias, Political bias, Large Language Models, ChatGPT

JEL Classification: C10, C89, D83, L86, Z00

Suggested Citation

Motoki, Fabio and Pinho Neto, Valdemar and Rangel, Victor, More Human than Human: Measuring ChatGPT Political Bias (July 18, 2023). Public Choice, https://doi.org/10.1007/s11127-023-01097-2. Available at SSRN: https://ssrn.com/abstract=4372349 or http://dx.doi.org/10.2139/ssrn.4372349

Fabio Motoki (Contact Author)

University of East Anglia (UEA) - Norwich Business School ( email )

Norwich
NR4 7TJ
United Kingdom

HOME PAGE: http://research-portal.uea.ac.uk/en/persons/fabio-motoki

Valdemar Pinho Neto

Brazilian School of Economics and Finance ( email )

RJ 22250
Rio de Janeiro
Brazil

Victor Rangel

Instituto de Ensino e Pesquisa (INSPER) ( email )

São Paulo
Brazil
