Evaluating Gender, Racial, and Age Biases in Large Language Models: A Comparative Analysis of Occupational and Crime Scenarios
Accepted at IEEE CAI (Conference on Artificial Intelligence) 2025
12 Pages
Posted: 28 Mar 2025; Last revised: 30 Mar 2025
Date Written: September 22, 2024
Abstract
Recent advancements in Large Language Models (LLMs) have been notable, yet widespread enterprise adoption remains limited due to various constraints. This paper examines bias in LLMs, a crucial issue affecting their usability, reliability, and fairness. Researchers have developed strategies to mitigate bias, including debiasing layers, specialized reference datasets such as Winogender and WinoBias, and reinforcement learning from human feedback (RLHF); these techniques have been integrated into the latest LLMs. Our study evaluates gender bias in occupational scenarios, and gender, age, and racial bias in crime scenarios, across four leading LLMs released in 2024: Gemini 1.5 Pro, Llama 3 70B, Claude 3 Opus, and GPT-4o. Findings reveal that LLMs often depict female characters more frequently than male ones across occupations, showing a 37% deviation from US BLS data. In crime scenarios, deviations from US FBI data are 54% for gender, 28% for race, and 17% for age. We observe that efforts to reduce gender and racial bias often over-index on one sub-class, potentially exacerbating the issue. These results highlight the limitations of current bias mitigation techniques and underscore the need for more effective approaches.
Keywords: Fairness, Bias, Ethical AI, LLMs, Large Language Models, Explainability, Societal impacts of Machine Learning, NLP, GPT-4o, Gemini 1.5 Pro, Llama 3 70B, Claude 3 Opus, Gender bias, Racial bias, Responsible AI