Discovery of Bias and Strategic Behavior in Crowdsourced Performance Assessment
Date Written: January 1, 2017
Abstract
With the industry trend of shifting from traditional hierarchical approaches to flatter management structures, crowdsourced performance assessment has gained mainstream popularity. One fundamental challenge of crowdsourced performance assessment is the risk that personal interests can distort facts, especially when the system is used to determine merit pay or promotion. In this paper, we develop a method to identify bias and strategic behavior in crowdsourced performance assessment, using a rich dataset collected from a professional service firm in China. We find a pattern of “discriminatory generosity” in peer evaluation: raters downgrade peer coworkers who have passed objective promotion requirements while overrating peer coworkers who have not yet passed. This introduces two types of bias: the first directed against more competent competitors, and the second favoring less eligible peers, which can serve as a mask for the first. This paper also aims to bring a fairness-aware data mining perspective to talent and management computing. Historical decision records, such as performance ratings, often contain subjective judgments that are prone to bias and strategic behavior. For practitioners of predictive talent analytics, it is important to investigate potential bias and strategic behavior underlying such records.
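To make the detection idea concrete, the sketch below shows one way such a pattern could be tested. It is a minimal illustration, not the paper's actual method: the column names (peer_rating, objective_score, passed), the file peer_ratings.csv, and the residual-based comparison are all illustrative assumptions. The idea is to residualize subjective peer ratings on the objective performance score, then check whether ratees who passed the objective promotion bar receive systematically lower residual ratings than those who have not.

```python
# Minimal, hypothetical sketch of the detection idea: compare peer ratings
# against an objective qualification signal. Column names and the data file
# are illustrative assumptions, not the paper's actual schema or method.
import pandas as pd
from scipy import stats

# Hypothetical schema: one row per (rater, ratee) pair.
#   peer_rating     - subjective score given by a peer rater
#   objective_score - score on the objective promotion requirement
#   passed          - True if the ratee met the objective promotion bar
ratings = pd.read_csv("peer_ratings.csv")

# Residualize peer ratings on the objective score so that any remaining
# group difference is not explained by actual measured performance.
slope, intercept, *_ = stats.linregress(
    ratings["objective_score"], ratings["peer_rating"]
)
ratings["residual"] = ratings["peer_rating"] - (
    intercept + slope * ratings["objective_score"]
)

# "Discriminatory generosity" would show up as negative residuals for
# ratees who passed the objective bar and positive residuals for those
# who have not yet passed.
passed = ratings.loc[ratings["passed"], "residual"]
not_passed = ratings.loc[~ratings["passed"], "residual"]
t_stat, p_value = stats.ttest_ind(passed, not_passed, equal_var=False)
print(f"mean residual (passed):     {passed.mean():+.3f}")
print(f"mean residual (not passed): {not_passed.mean():+.3f}")
print(f"Welch t-test: t={t_stat:.2f}, p={p_value:.4f}")
```

A significantly negative gap between the two residual means would be consistent with the rating pattern described in the abstract; the paper's own analysis of the Chinese professional service firm's data is richer than this two-group comparison.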
Keywords: peer performance evaluation, strategic manipulation, personnel economics, sabotage, envy, field data