Goal Orientation for Fair Machine Learning Algorithms
Posted: 23 Dec 2022
Date Written: December 12, 2022
A key challenge facing the use of Machine Learning (ML) in operational selection settings (e.g., the processing of loan or job applications) is the potential bias against (racial and gender) minorities. To address this challenge, a rich literature of Fairness-Aware ML (FAML) algorithms has emerged that attempts to mitigate bias while maintaining the predictive accuracy of ML algorithms. Almost all existing FAML algorithms define their optimization goals according to a selection task, meaning that ML outputs are assumed to be the final selection outcome. In practice, though, ML outputs are rarely used as-is. In personnel selection, for example, ML often serves a support role to human resource managers, allowing them to more easily exclude unqualified applicants. This effectively assigns ML a screening rather than a selection task. It might be tempting to treat selection and screening as two variations of the same task that differ only quantitatively in the selection rate (i.e., the percentage of applicants allowed to pass). This paper, however, reveals a qualitative difference between the two in terms of fairness. Specifically, we demonstrate through conceptual development and mathematical analysis that miscategorizing a screening task as a selection task can result in a suite of fairness problems, from exacerbating between-group differences in selection quality to creating selection biases within the minority group. After validating our findings with empirical evidence, we discuss several business and policy implications, highlighting the need for firms and policymakers to properly categorize the task assigned to ML when assessing and correcting algorithmic biases.
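The selection-versus-screening distinction described above can be illustrated with a minimal sketch (not the paper's model): the same score-ranked applicant pool treated as a selection task (ML picks the top 10% as the final outcome) versus a screening task (ML passes the top 50% to a human decision-maker). The group labels, pool sizes, and score distributions below are hypothetical, chosen only to show that per-group pass rates, and hence the disparity between groups, depend on which task the cutoff encodes.

```python
import random

random.seed(0)

# Hypothetical applicant pool of (group, score) pairs; group "B" is the
# smaller (minority) group with a slightly lower mean score.
pool = [("A", random.gauss(0.60, 0.15)) for _ in range(800)] + \
       [("B", random.gauss(0.55, 0.15)) for _ in range(200)]

def pass_rate_by_group(pool, selection_rate):
    """Rank applicants by score, pass the top `selection_rate` fraction,
    and return each group's pass rate."""
    ranked = sorted(pool, key=lambda x: x[1], reverse=True)
    passed = ranked[:int(len(ranked) * selection_rate)]
    rates = {}
    for g in ("A", "B"):
        n_group = sum(1 for grp, _ in pool if grp == g)
        rates[g] = sum(1 for grp, _ in passed if grp == g) / n_group
    return rates

selection = pass_rate_by_group(pool, 0.10)   # ML output is the final outcome
screening = pass_rate_by_group(pool, 0.50)   # ML only filters for a human

# The between-group gap in pass rates generally differs across the two
# tasks, so a fairness intervention calibrated at one selection rate need
# not transfer to the other.
print("selection:", selection)
print("screening:", screening)
```

Since screening passes a strict superset of the applicants that selection passes, each group's pass rate is at least as high under screening, but the relative disparity between groups can shift, which is the quantitative surface of the qualitative difference the paper analyzes.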
Keywords: Fairness, machine learning, optimization goal, selection, screening
JEL Classification: M15, M14, M51