Algorithms, Correcting Biases

Forthcoming, Social Research

9 Pages Posted: 13 Dec 2018

Cass R. Sunstein

Harvard Law School; Harvard University - Harvard Kennedy School (HKS)

Date Written: December 12, 2018

Abstract

A great deal of theoretical work explores the possibility that algorithms may be biased in one or another respect. But for purposes of law and policy, some of the most important empirical research finds exactly the opposite. In the context of bail decisions, an algorithm designed to predict flight risk does much better than human judges, in large part because the latter place an excessive emphasis on the current offense. Current Offense Bias, as we might call it, is best seen as a cousin of “availability bias,” a well-known source of mistaken probability judgments. The broader lesson is that well-designed algorithms should be able to avoid cognitive biases of many kinds. Existing research on bail decisions also casts a new light on how to think about the risk that algorithms will discriminate on the basis of race (or other factors). Algorithms can easily be designed so as to avoid taking account of race (or other factors). They can also be constrained so as to produce whatever kind of racial balance is sought, and thus to reveal tradeoffs among various social values.
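The abstract's final point, that an algorithm can be constrained to produce a chosen racial balance, and that doing so makes tradeoffs explicit, can be illustrated with a minimal sketch. The data, function name, and two-group setup below are hypothetical (not from the paper); the sketch simply picks a per-group score cutoff so each group is detained at the same target rate, which makes visible that equal rates may require unequal cutoffs.

```python
# A minimal sketch with hypothetical data (not from the paper): given flight-risk
# scores for two groups, choose per-group cutoffs so that each group is detained
# at the same target rate, exposing the balance-vs-single-standard tradeoff.

def threshold_for_rate(scores, target_rate):
    """Return the cutoff that detains roughly `target_rate` of this group."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(ranked)))
    return ranked[k - 1]

# Hypothetical predicted flight-risk probabilities for two groups.
group_a = [0.9, 0.7, 0.6, 0.4, 0.2, 0.1]
group_b = [0.8, 0.5, 0.3, 0.3, 0.2, 0.1]

target = 1 / 3  # detain one third of each group -> equal detention rates
cut_a = threshold_for_rate(group_a, target)
cut_b = threshold_for_rate(group_b, target)

detained_a = [s for s in group_a if s >= cut_a]
detained_b = [s for s in group_b if s >= cut_b]

# Rates are equal by construction, but the two groups face different risk
# cutoffs -- the tradeoff among social values the abstract says such
# constraints reveal.
print(cut_a, cut_b)
print(len(detained_a) / len(group_a), len(detained_b) / len(group_b))
```

Under these made-up scores, equalizing detention rates forces a stricter cutoff on one group than the other; an unconstrained algorithm would instead apply one cutoff and let group rates diverge.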

Suggested Citation

Sunstein, Cass R., Algorithms, Correcting Biases (December 12, 2018). Forthcoming, Social Research. Available at SSRN: https://ssrn.com/abstract=3300171

Cass R. Sunstein (Contact Author)

Harvard Law School

1575 Massachusetts Ave
Areeda Hall 225
Cambridge, MA 02138
United States
617-496-2291 (Phone)

Harvard University - Harvard Kennedy School (HKS)

79 John F. Kennedy Street
Cambridge, MA 02138
United States
