Anchored to Bias: How AI-Human Scoring Can Induce and Reduce Bias Due to the Anchoring Effect
39 Pages Posted: 10 Dec 2019
Date Written: November 22, 2019
As artificial intelligence (AI) is increasingly embedded in organizational decisions, firms are responsible for ensuring that AI and its implementation do not lead to disparities in outcomes for different groups of people. In the U.S., it is illegal to differentiate among job applicants and employees based on protected attributes such as gender and ethnicity. The objective of this paper is to examine how biased artificial intelligence influences individual decision-makers. Using data from multiple commercially available emotional intelligence platforms, we find evidence of disparities in emotion scores across a protected attribute, race. Next, we conduct an experiment on Amazon Mechanical Turk (AMT) to assess whether the AI and human emotion scores from the first part induce disparities in respondents’ own emotion ratings via the anchoring effect. We find evidence of both anchoring and selective anchoring. We also examine whether reporting the average disparity between groups interrupts the anchoring effect and reduces bias. This paper contributes to the information systems literature by building on research on algorithmic bias, the anchoring effect, and AI-human hybrid decisions.
Keywords: Algorithmic Bias, Ethics in AI, Discrimination, Artificial Intelligence, Managing Artificial Intelligence