What's Fair About Individual Fairness?
Fleisher, W. 2021. What's Fair About Individual Fairness? In AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (July 2021), 480–490. https://doi.org/10.1145/3461702.3462621
25 pages. Posted: 6 Apr 2021. Last revised: 1 Nov 2021.
Date Written: July 1, 2021
One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle I call "similar treatment," which requires that similar individuals be treated similarly. IF offers a precise account of this principle, using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other methods for determining fairness. I argue that individual fairness cannot serve as a definition of fairness. Moreover, IF methods should not be given priority over other fairness methods, nor used in isolation from them. To support these conclusions, I describe four in-principle problems for individual fairness as a definition and as a method for ensuring fairness: (1) counterexamples show that similar treatment (and therefore IF) is insufficient to guarantee fairness; (2) IF methods for learning similarity metrics are at risk of encoding human implicit bias; (3) IF requires prior moral judgments, limiting its usefulness as a guide for fairness and undermining its claim to define fairness; and (4) the incommensurability of relevant moral values makes similarity metrics impossible to construct for many tasks. In light of these limitations, I suggest that individual fairness cannot be a definition of fairness, and instead should be seen as one tool among many for ameliorating algorithmic bias.
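For readers unfamiliar with the distance-metric formulation, the standard precise account of individual fairness in the literature (due to Dwork et al., "Fairness Through Awareness," 2012) is a Lipschitz-style constraint. In a hedged sketch: let $M$ map individuals to distributions over outcomes, let $d$ be a task-specific metric measuring how similar two individuals are, and let $D$ be a metric on outcome distributions; IF then requires

$$
D\bigl(M(x), M(y)\bigr) \;\le\; d(x, y) \quad \text{for all individuals } x, y.
$$

That is, individuals who are close under $d$ must receive outcome distributions that are close under $D$. The paper's objections (2)–(4) above target the construction and coherence of the similarity metric $d$ itself.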
Keywords: Algorithmic fairness, Individual fairness, Ethics of AI, Incommensurable values