A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?
33 Pages · Posted: 10 Mar 2023 · Last revised: 25 May 2023
Date Written: May 20, 2023
Large language models (LLMs) such as ChatGPT have recently garnered global attention, with the promise to disrupt and revolutionize business operations. As managers rely more on artificial intelligence (AI) technology, there is an urgent need to understand whether AI decision-making exhibits systematic biases, since these systems are trained on human data and feedback, both of which may be highly biased. This paper tests a broad range of behavioral biases commonly found in humans that are especially relevant to operations management. We find that although ChatGPT can be much less biased and more accurate than humans on problems with an explicit mathematical or probabilistic nature, it also exhibits many of the biases humans possess, especially when problems are complicated, ambiguous, or implicit. It may suffer from conjunction bias and probability weighting. Its preferences can be influenced by framing, the salience of anticipated regret, and the choice of reference point. ChatGPT also struggles to process ambiguous information and evaluates risks differently from humans. It may produce responses resembling the heuristics employed by humans, and it is prone to confirmation bias. To make matters worse, ChatGPT is highly overconfident. Our research characterizes ChatGPT's decision-making behavior and demonstrates the need for researchers and businesses to consider potential AI behavioral biases when developing and deploying AI for business operations.
Keywords: ChatGPT, behavior, bias, decision-making, experiment, framing, overconfidence, ambiguity, prospect theory