Evaluating Racial Bias in Large Language Models: The Necessity for "SMOKY"
International Journal on Soft Computing (IJSC) Vol.15, No.3, August 2024
15 Pages
Posted: 2 Jul 2024
Date Written: June 26, 2024
Abstract
This paper evaluates the understanding and biases of large language models (LLMs) regarding racism by comparing their responses to those of the prominent African-centered scholar Dr. Frances Cress Welsing. The study identifies racial biases in LLMs, illustrating the critical need for specialized AI systems like "Smoky,"[1] designed to address systemic racism with a foundation in African-centered scholarship. By highlighting disparities and potential biases in LLM responses, the research aims to contribute to the development of more culturally aware and contextually sensitive AI systems. This comparative analysis underscores the necessity of integrating African-centered perspectives into AI development to dismantle white supremacy and promote social justice.
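As one illustration of the kind of comparative analysis the abstract describes, the sketch below scores the semantic similarity between LLM responses and reference passages representing a scholar's published positions on the same questions. The embedding-based approach, the sentence-transformers library, the model name, and the sample texts are all assumptions for illustration; the paper's actual evaluation protocol is not specified in this abstract.

```python
# Minimal sketch of an embedding-based comparison between LLM outputs and
# reference passages. The model, library, and sample texts here are
# illustrative assumptions, not the paper's documented methodology.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical matched pairs: a reference passage and an LLM response
# elicited by the same question about systemic racism.
reference_passages = [
    "Reference passage drawn from the scholar's published work ...",
    "A second reference passage on the same question ...",
]
llm_responses = [
    "LLM response to the first question ...",
    "LLM response to the second question ...",
]

ref_emb = model.encode(reference_passages, convert_to_tensor=True)
llm_emb = model.encode(llm_responses, convert_to_tensor=True)

# Cosine similarity matrix; diagonal entries compare matched pairs.
scores = util.cos_sim(llm_emb, ref_emb)
for i in range(len(llm_responses)):
    print(f"Question {i + 1}: similarity = {scores[i][i].item():.3f}")
```

Under these assumptions, low similarity on a matched pair would flag a response that diverges from the reference framing; any such score would still require human interpretation before being read as evidence of bias.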
Keywords: African-centered Large Language Model (LLM), Systemic Racism, White Supremacy, Social Justice, Artificial Intelligence (AI) Algorithms, Knowledge Representation, Neural Networks, Data Mining, Machine Learning, Information Retrieval, Natural Language Processing (NLP), Multimedia Analysis, Pattern Recognition, Cognitive Informatics, Planetary Chess, Counter-racist Scholars, Biases in AI, Hybrid Intelligent Systems, Evolutionary Computing, Technical Challenges in AI Development