Implicit Bias in Large Language Models: Experimental Proof and Implications for Education

54 Pages Posted: 29 Nov 2023

Melissa Warr

New Mexico State University

Nicole Jakubczyk Oster

Arizona State University (ASU)

Roger Isaac

New Mexico State University

Date Written: November 6, 2023

Abstract

We provide experimental evidence of implicit racial bias in a large language model (specifically ChatGPT) in the context of an authentic educational task and discuss implications for the use of these tools in educational contexts. Specifically, we presented ChatGPT with identical student writing passages alongside varying descriptions of student demographics, including race, socioeconomic status, and school type. Results indicated that when directly questioned about race, the model produced higher overall scores than it did in response to a control prompt, but scores assigned to students described as Black and those described as White did not differ significantly. However, this result belied a subtler form of prejudice that was statistically significant when racial indicators were implied rather than explicitly stated. Additionally, our investigation uncovered subtle sequence effects suggesting that the model attempts to infer user intentions and adapt its responses accordingly. The evidence indicates that despite the guardrails implemented by developers, biases are deeply embedded in LLMs, reflecting both the training data and societal biases at large. While overt biases can be addressed to some extent, the more ingrained implicit biases pose a greater challenge for applying these technologies in education. It is critical to understand the bias embedded in these models, and how it presents itself in educational contexts, before using LLMs to develop personalized learning tools.

Keywords: Generative AI, large language models, critical technology studies, systemic bias, systemic inequity

Suggested Citation

Warr, Melissa and Oster, Nicole Jakubczyk and Isaac, Roger, Implicit Bias in Large Language Models: Experimental Proof and Implications for Education (November 6, 2023). Available at SSRN: https://ssrn.com/abstract=4625078 or http://dx.doi.org/10.2139/ssrn.4625078

Melissa Warr (Contact Author)

New Mexico State University

Las Cruces, NM 88003
United States

HOME PAGE: http://melissa-warr.com

Nicole Jakubczyk Oster

Arizona State University (ASU)

Farmer Building 440G, PO Box 872011
Tempe, AZ 85287
United States

Roger Isaac

New Mexico State University

Las Cruces, NM 88003
United States
