The Proxy Problem: Fairness and Artificial Intelligence
7 Pages · Posted: 24 Aug 2019 · Last revised: 7 Nov 2019
Date Written: August 23, 2019
Developers of predictive systems use proxies when it is impossible or too expensive to directly observe the attributes relevant to the predictions they would like to make. Using proxies means that one area of a person’s life can have significant consequences for another, seemingly disconnected area, and that raises concerns about fairness. Our starting point for addressing the fairness question is the economist John Roemer’s observation in Equality of Opportunity that a conception of “equality of opportunity . . . prevalent today in Western democracies . . . says that society should do what it can to ‘level the playing field’ among individuals who compete for positions.” Proxies raise novel “level playing field” questions. These questions raise issues of social justice that extend far beyond discrimination against protected groups, the focus typical of work on AI and fairness. We outline an approach to regulating proxies that assigns a regulatory role to (in the US system) a federal agency such as the Federal Trade Commission, and we show how mathematically expressed constraints on AI can play a role in such an approach.
Keywords: proxy variables, proxies, norms, coordination norms, equal opportunity, level playing field