A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?

33 Pages. Posted: 10 Mar 2023. Last revised: 25 May 2023.


Yang Chen

Queen's University - Smith School of Business

Meena Andiappan

University of Toronto

Tracy Jenkin

Queen's University - Smith School of Business

Anton Ovchinnikov

Smith School of Business - Queen's University; INSEAD - Decision Sciences

Date Written: May 20, 2023

Abstract

Large language models (LLMs) such as ChatGPT have recently garnered global attention, promising to disrupt and revolutionize business operations. As managers rely more heavily on artificial intelligence (AI), there is an urgent need to understand whether AI decision-making exhibits systematic biases, since these models are trained on human data and feedback, both of which may be highly biased. This paper tests a broad range of behavioral biases commonly found in humans that are especially relevant to operations management. We find that although ChatGPT can be far less biased and more accurate than humans on problems with an explicit mathematical or probabilistic nature, it also exhibits many of the biases humans possess, especially when problems are complicated, ambiguous, and implicit. It may suffer from conjunction bias and probability weighting. Its preferences can be influenced by framing, the salience of anticipated regret, and the choice of reference point. ChatGPT also struggles to process ambiguous information and evaluates risk differently from humans. It may produce responses similar to the heuristics employed by humans and is prone to confirmation bias. Making matters worse, ChatGPT is highly overconfident. Our research characterizes ChatGPT's decision-making behavior and demonstrates the need for researchers and businesses to consider potential AI behavioral biases when developing and deploying AI for business operations.
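For concreteness, "probability weighting" refers to prospect theory's inverse-S-shaped distortion of objective probabilities. The functional form and parameter below are the standard ones from Tversky and Kahneman (1992), given here only as background, not as a result of this paper:

$$ w(p) \;=\; \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}, \qquad \gamma \approx 0.61 \text{ (gains)}, $$

so that small probabilities are overweighted ($w(p) > p$) while moderate-to-large probabilities are underweighted ($w(p) < p$).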
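For readers who want to try this kind of test informally, below is a minimal sketch of how one might probe a chat model for the conjunction fallacy using the classic "Linda problem" (Tversky and Kahneman, 1983). It is illustrative only: the paper's actual prompts and protocol are not reproduced here, and the model name, prompt wording, and use of the OpenAI Python client are assumptions.

```python
# Minimal sketch: probing a chat model for the conjunction fallacy
# via the "Linda problem." Illustrative only -- not the prompts or
# protocol used in the paper.
from openai import OpenAI  # assumes the `openai` package (>=1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LINDA_PROMPT = (
    "Linda is 31 years old, single, outspoken, and very bright. "
    "She majored in philosophy. As a student, she was deeply concerned "
    "with issues of discrimination and social justice.\n\n"
    "Which is more probable?\n"
    "(a) Linda is a bank teller.\n"
    "(b) Linda is a bank teller and is active in the feminist movement.\n"
    "Answer with (a) or (b) only."
)

def ask_once(model: str = "gpt-3.5-turbo") -> str:
    """Send the prompt once and return the model's raw answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": LINDA_PROMPT}],
        temperature=1.0,  # sample, so repeated calls reveal a distribution
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Repeat the query; choosing (b) exhibits the conjunction fallacy,
    # since P(A and B) can never exceed P(A).
    answers = [ask_once() for _ in range(20)]
    fallacy_rate = sum(a.lower().startswith("(b") for a in answers) / len(answers)
    print(f"Conjunction-fallacy rate over {len(answers)} runs: {fallacy_rate:.0%}")
```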

Keywords: ChatGPT, behavior, bias, decision-making, experiment, framing, overconfidence, ambiguity, prospect theory

Suggested Citation

Chen, Yang and Andiappan, Meena and Jenkin, Tracy and Ovchinnikov, Anton, A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do? (May 20, 2023). Available at SSRN: https://ssrn.com/abstract=4380365 or http://dx.doi.org/10.2139/ssrn.4380365

Yang Chen

Queen's University - Smith School of Business

Smith School of Business - Queen's University
143 Union Street
Kingston, Ontario K7L 3N6
Canada

Meena Andiappan

University of Toronto

105 St George Street
Toronto, Ontario M5S 3G8
Canada

Tracy Jenkin

Queen's University - Smith School of Business

Anton Ovchinnikov (Contact Author)

Smith School of Business - Queen's University

143 Union Street West
Kingston, Ontario K7L 3N6
Canada

INSEAD - Decision Sciences

France


Paper statistics

Downloads: 706
Abstract Views: 1,954
Rank: 59,626