Eliminating Latent Discrimination: Train Then Mask

10 Pages
Posted: 3 Jan 2019

Date Written: January 3, 2019

Abstract

How can we control for latent discrimination in predictive models? How can we provably remove it? Such questions are at the heart of algorithmic fairness and its impact on society. In this paper, we define a new operational fairness criterion, inspired by the well-understood notion of omitted-variable bias in statistics and econometrics. Our notion of fairness effectively controls for sensitive features and provides diagnostics for deviations from fair decision making. We then establish analytical and algorithmic results about the existence of a fair classifier in the context of supervised learning. Our results readily imply a simple, but rather counter-intuitive, strategy for eliminating latent discrimination: to prevent other features from proxying for sensitive features, we must include sensitive features in the training phase but exclude them in the test/evaluation phase while controlling for their effects. We evaluate the performance of our algorithm on several real-world datasets and show how fairness on these datasets can be improved with a very small loss in accuracy.
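To make the train-then-mask strategy concrete, the following is a minimal sketch of the idea as the abstract describes it, not the authors' exact algorithm. The synthetic data, the choice of logistic regression, and the constant used to mask the sensitive feature at evaluation time are all assumptions made for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy data: X_plain holds ordinary features; s is a binary
    # sensitive feature (e.g., a protected-group indicator).
    rng = np.random.default_rng(0)
    n = 1000
    X_plain = rng.normal(size=(n, 3))
    s = rng.integers(0, 2, size=n)
    # The outcome depends on the ordinary features and, undesirably, on s.
    y = ((X_plain @ np.array([1.0, -0.5, 0.25]) + 0.8 * s
          + rng.normal(scale=0.5, size=n)) > 0).astype(int)

    # Train phase: INCLUDE the sensitive feature, so the model attributes
    # group-correlated variation to s itself rather than to proxy features.
    X_train = np.column_stack([X_plain, s])
    clf = LogisticRegression().fit(X_train, y)

    # Mask phase: at evaluation time, replace every individual's sensitive
    # feature with one fixed value, so predictions cannot vary with s.
    s_masked = np.full(n, 0.5)  # assumed masking constant; a design choice
    X_eval = np.column_stack([X_plain, s_masked])
    fair_predictions = clf.predict(X_eval)

The point of the two phases is that, because the sensitive feature is available during training, the model has no incentive to load its effect onto correlated proxies; masking it with a common value at evaluation time then removes its direct effect from the predictions.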

Suggested Citation

Ghili, Soheil and Kazemi, Ehsan and Karbasi, Amin, Eliminating Latent Discrimination: Train Then Mask (January 3, 2019). Cowles Foundation Discussion Paper No. 2157. Available at SSRN: https://ssrn.com/abstract=3309776 or http://dx.doi.org/10.2139/ssrn.3309776

Soheil Ghili (Contact Author)

Yale University

165 Whitney Avenue
New Haven, CT 06511
United States

Ehsan Kazemi

Independent

Amin Karbasi

Yale University

New Haven, CT 06520
United States
