Fairness in Machine Learning: Lessons from Political Philosophy
Conference on Fairness, Accountability, and Transparency, New York, Forthcoming
Proceedings of Machine Learning Research, Vol. 81, pp. 1–11, Forthcoming
11 Pages
Posted: 14 Dec 2017
Date Written: December 8, 2017
Abstract
What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended considerable effort in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.
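To make the contrast in the abstract concrete, the following is a minimal sketch (not from the paper) of how two of the candidate definitions might be operationalised: equal probability of obtaining some benefit across groups, and a maximin-style concern for the group that is worst off. The group labels, decisions, and harm scores are hypothetical example data.

```python
# Illustrative sketch (not from the paper): two fairness notions the abstract
# contrasts, computed over hypothetical group labels, decisions, and harms.

from collections import defaultdict

def selection_rates(groups, decisions):
    """Per-group probability of receiving the benefit (decision == 1);
    equal rates across groups corresponds to an 'equal probability' criterion."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, d in zip(groups, decisions):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def worst_off_group_harm(groups, harms):
    """Average harm for the worst-off group; a maximin-style criterion would
    prefer the model that minimises this quantity."""
    totals, harm_sums = defaultdict(int), defaultdict(float)
    for g, h in zip(groups, harms):
        totals[g] += 1
        harm_sums[g] += h
    return max(harm_sums[g] / totals[g] for g in totals)

if __name__ == "__main__":
    groups    = ["a", "a", "a", "b", "b", "b"]
    decisions = [1, 1, 0, 1, 0, 0]               # 1 = benefit granted
    harms     = [0.0, 0.0, 1.0, 0.0, 1.0, 1.0]   # e.g. cost of a wrongful denial
    print(selection_rates(groups, decisions))    # {'a': 0.67, 'b': 0.33}
    print(worst_off_group_harm(groups, harms))   # 0.67
```

A model could satisfy one of these criteria while violating the other, which is one way the definitional disagreements discussed in the paper surface in practice.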
Keywords: fairness, discrimination, machine learning, algorithmic decision-making, egalitarianism