Racial Influence on Automated Perceptions of Emotions
11 Pages. Posted: 6 Dec 2018. Last revised: 17 Dec 2018.
Date Written: November 9, 2018
The practical applications of artificial intelligence are expanding into many areas of society, prompting growing interest in the potential biases of such algorithms. Facial analysis, one application of artificial intelligence, is increasingly used in real-world situations. For example, some organizations ask job candidates to answer predefined questions in a recorded video and then use facial recognition to analyze the applicants' faces. In addition, some companies are developing facial recognition software to scan faces in crowds and assess threats, specifically citing doubt and anger as emotions that indicate a threat.
This study provides evidence that facial recognition software interprets emotions differently depending on the person's race. Using a publicly available data set of professional basketball players' pictures, I compare the emotional analysis from two different facial recognition services, Face++ and Microsoft's Face API. Both services interpret black players as having more negative emotions than white players, but through two different mechanisms. Face++ consistently interprets black players as angrier than white players, even after controlling for their degree of smiling. Microsoft registers contempt instead of anger: it interprets black players as more contemptuous when their facial expressions are ambiguous, and as the players' smiles widen, the disparity disappears.
This finding has implications for individuals, organizations, and society, and it contributes to the growing literature on bias and disparate impact in AI.
Keywords: artificial intelligence, race, bias, econometrics, coarsened exact matching, facial recognition