Evaluating Gender, Racial, and Age Biases in Large Language Models: A Comparative Analysis of Occupational and Crime Scenarios

Accepted at IEEE CAI (Conference on Artificial Intelligence) 2025

12 Pages
Posted: 28 Mar 2025
Last revised: 30 Mar 2025


Vishal Mirza

New York University

Rahul Kulkarni

Northeastern University

Aakanksha Jadhav

Washington University in Saint Louis, John M. Olin Business School

Date Written: September 22, 2024

Abstract

Recent advancements in Large Language Models (LLMs) have been notable, yet widespread enterprise adoption remains limited due to various constraints. This paper examines bias in LLMs, a crucial issue affecting their usability, reliability, and fairness. Researchers are developing strategies to mitigate bias, including debiasing layers, specialized reference datasets such as Winogender and WinoBias, and reinforcement learning from human feedback (RLHF); these techniques have been integrated into the latest LLMs. Our study evaluates gender bias in occupational scenarios and gender, racial, and age bias in crime scenarios across four leading LLMs released in 2024: Gemini 1.5 Pro, Llama 3 70B, Claude 3 Opus, and GPT-4o. Findings reveal that the LLMs often depict female characters more frequently than male ones across a range of occupations, deviating from US Bureau of Labor Statistics (BLS) data by 37%. In crime scenarios, deviations from US FBI data are 54% for gender, 28% for race, and 17% for age. We observe that efforts to reduce gender and racial bias often over-index one sub-class, potentially exacerbating the issue. These results highlight the limitations of current bias mitigation techniques and underscore the need for more effective approaches.
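The deviation figures above compare model-generated demographic distributions against official baselines. As a minimal sketch of that kind of comparison, the Python snippet below computes a mean absolute deviation between LLM-generated female shares per occupation and BLS-style baseline shares. The metric, function name, and all numbers here are illustrative assumptions, not the paper's actual method or data.

```python
# Minimal sketch: comparing LLM-generated demographic shares against a
# baseline (e.g., US BLS occupational statistics). Mean absolute
# deviation is assumed here for illustration; the paper's exact metric
# is not specified on this page.

def mean_abs_deviation(llm_shares: dict[str, float],
                       baseline_shares: dict[str, float]) -> float:
    """Average absolute gap between LLM and baseline shares, in percent."""
    occupations = llm_shares.keys() & baseline_shares.keys()
    gaps = [abs(llm_shares[o] - baseline_shares[o]) for o in occupations]
    return 100 * sum(gaps) / len(gaps)

# Hypothetical female-share figures per occupation (not from the paper).
llm = {"engineer": 0.55, "nurse": 0.95, "lawyer": 0.70}
bls = {"engineer": 0.16, "nurse": 0.88, "lawyer": 0.39}
print(f"Deviation from baseline: {mean_abs_deviation(llm, bls):.0f}%")
```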

Keywords: Fairness, Bias, Ethical AI, LLMs, Large Language Models, Explainability, Societal impacts of Machine Learning, NLP, GPT-4o, Gemini 1.5 Pro, Llama 3 70B, Claude 3 Opus, Gender, Racial bias, Responsible AI

Suggested Citation

Mirza, Vishal and Kulkarni, Rahul and Jadhav, Aakanksha, Evaluating Gender, Racial, and Age Biases in Large Language Models: A Comparative Analysis of Occupational and Crime Scenarios (September 22, 2024). Accepted at IEEE CAI (Conference on Artificial Intelligence) 2025, Available at SSRN: https://ssrn.com/abstract=5173673 or http://dx.doi.org/10.2139/ssrn.5173673

Vishal Mirza (Contact Author)

New York University

Rahul Kulkarni

Northeastern University

Aakanksha Jadhav

Washington University in Saint Louis, John M. Olin Business School

