When Is an Algorithm Fair? Errors, Proxies, and Predictions in Algorithmic Decision Making

40 Pages · Posted: 2 May 2018 · Last revised: 5 Feb 2020

Robert H. Sloan

University of Illinois at Chicago

Richard Warner

Chicago-Kent College of Law

Date Written: April 16, 2018

Abstract

The aliens land. Imagine beneficent aliens who come in peace. One of their first acts is to provide us with an analytics/artificial intelligence program that predicts future thought and action. Call the program AI, for alien intelligence. We have no explanation or understanding of why AI predicts what it does. Even the best human computer science experts find large parts of the AI program completely unintelligible. It appears to involve programming and statistical techniques unknown to us. Its predictions are more accurate than ours but, like ours, still have a fairly high error rate. Humans — businesses, governments, and individuals — embrace the program, and many propose using AI systematically in the widest possible range of contexts as a basis for prediction and action. We contend that it would be extremely unwise to do so. To the extent that human-created predictive systems are similar to AI, it is also unwise to use them across a similarly wide range of contexts. We assume AI has three features, all shared to some extent with human-created systems. First, it is unintelligible. We cannot figure out how or why it reaches the conclusions it does. Second, like human-created systems, AI analyzes extensive data to detect statistical regularities that hold for people in certain categories. To do so, it abstracts from the contextually rich narratives that render people’s individual arcs through the world intelligible. This makes significant misclassification inevitable. Third, the aliens caution us that AI cannot detect its own misclassifications. There are no feedback mechanisms that detect and correct errors. Given these features, it would be a serious mistake to use AI across the board as a basis for prediction and action. To begin with, AI will create winners and losers — a very large number of them, since AI governs the widest possible range of contexts. Once negatively categorized, losers will face great difficulty in escaping the categorizations that condemn them to that role. The high error rate means that many of the categorizations are wrong, and the lack of feedback ensures that AI will not correct its errors. AI’s unintelligibility means that there is no way to explain to losers why such treatment is not capricious and arbitrary. Such a predictive system is both profoundly unjust and a serious threat to social stability. This raises three questions. To what extent are current human-created systems like AI? How can we ensure that our current systems do not have the objectionable features of AI? And to what extent, and in which cases, should we forgo the use of our systems?
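
A minimal, purely illustrative sketch (not part of the paper; the population size, 10% error rate, and 50% correction probability below are arbitrary assumptions) of the abstract's point that a high error rate without any feedback mechanism leaves misclassifications permanently uncorrected:

# Illustrative sketch only: a fixed error rate with no feedback means
# an initial wrong categorization simply carries forward across decisions.
import random

random.seed(0)

POPULATION = 100_000   # hypothetical number of people being scored
ERROR_RATE = 0.10      # assumed 10% chance of an initial wrong categorization
ROUNDS = 5             # repeated decisions (credit, hiring, housing, ...)

def misclassified_after(rounds: int, feedback: bool) -> int:
    # Count people still wrongly categorized after the given number of rounds.
    wrongly_labeled = 0
    for _ in range(POPULATION):
        wrong = random.random() < ERROR_RATE   # initial categorization error
        if feedback:
            # With feedback, each later round gives an assumed 50% chance
            # that an existing error is detected and corrected.
            for _ in range(rounds - 1):
                if wrong and random.random() < 0.5:
                    wrong = False
        # Without feedback, the initial label is never revisited.
        wrongly_labeled += wrong
    return wrongly_labeled

print("no feedback:  ", misclassified_after(ROUNDS, feedback=False))
print("with feedback:", misclassified_after(ROUNDS, feedback=True))

Under these made-up numbers, roughly 10,000 of the 100,000 people remain misclassified when there is no feedback, versus only a few hundred when errors can be detected and corrected in later rounds.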

Keywords: AI, Artificial Intelligence, Predictive Systems, Big Data, Norms, Coordination Norms, Creating Norms, Norms and Regulation

JEL Classification: K20, D63, D83

Suggested Citation

Sloan, Robert H. and Warner, Richard, When Is an Algorithm Fair? Errors, Proxies, and Predictions in Algorithmic Decision Making (April 16, 2018). Available at SSRN: https://ssrn.com/abstract=3163664 or http://dx.doi.org/10.2139/ssrn.3163664

Robert H. Sloan

University of Illinois at Chicago

1200 W Harrison St
Chicago, IL 60607
United States

Richard Warner (Contact Author)

Chicago-Kent College of Law

565 West Adams St.
Chicago, IL 60661
United States
