The Input Fallacy

86 Pages. Posted: 5 May 2020. Last revised: 21 Apr 2021.

Date Written: February 16, 2021

Abstract

Algorithmic credit pricing threatens to discriminate against protected groups. Traditionally, fair lending law has addressed such threats by scrutinizing inputs. But input scrutiny has become a fallacy in the world of algorithms.

Using a rich dataset of mortgages, I simulate algorithmic credit pricing and demonstrate that input scrutiny fails to address discrimination concerns and threatens to create an algorithmic myth of colorblindness. The ubiquity of correlations in big data combined with the flexibility and complexity of machine learning means that one cannot rule out the consideration of protected characteristics, such as race, even when one formally excludes them. Moreover, using inputs that include protected characteristics can in fact reduce disparate outcomes.
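
To make the proxy problem concrete, the sketch below is a hypothetical illustration in Python with synthetic data and made-up variable names; it is not the article's mortgage-data simulation. It trains a pricing model that is never given race as an input yet still produces race-correlated prices through a correlated feature such as geography:

    # Minimal sketch, not the article's simulation: even with race excluded as an
    # input, a flexible model can price along race through correlated proxies.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 20_000

    # Synthetic borrowers; the binary "race" variable is never given to the model.
    race = rng.integers(0, 2, n)
    geography = 0.8 * race + rng.normal(0.0, 0.5, n)   # proxy correlated with race
    income = 50 + 10 * rng.normal(0.0, 1.0, n) - 5 * race
    # Training label: historical rates embed a markup correlated with race.
    rate = 5.0 - 0.02 * income + 0.3 * race + rng.normal(0.0, 0.1, n)

    X = np.column_stack([geography, income])           # race formally excluded
    model = GradientBoostingRegressor(random_state=0).fit(X, rate)
    pred = model.predict(X)

    # Outcome check: predicted rates still differ by group despite "blind" inputs.
    print(pred[race == 1].mean() - pred[race == 0].mean())

Because the geographic variable carries information about race, formally excluding race does not prevent the model from pricing along racial lines; this is the sense in which input exclusion cannot rule out the consideration of protected characteristics.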

Nevertheless, the leading approaches to discrimination law in the algorithmic age continue to commit the input fallacy. These approaches suggest that we exclude protected characteristics and their proxies and limit algorithms to pre-approved inputs. Using my simulation exercise, I refute these approaches with new analysis. I demonstrate that they fail on their own terms, are infeasible, and overlook the benefits of accurate prediction. These failures are particularly harmful to marginalized groups and individuals because they threaten to perpetuate their historical exclusion from credit and, thus, from a central avenue to greater prosperity and equality.

I argue that fair lending law must shift to outcome-focused analysis. When it is no longer possible to scrutinize inputs, outcome analysis provides the only way to evaluate whether a pricing method leads to impermissible disparities. This is true not only under the legal doctrine of disparate impact, which has always cared about outcomes, but also under the doctrine of disparate treatment, which has historically avoided examining disparate outcomes. Disparate treatment, too, can no longer rely on input scrutiny and must instead be assessed through the lens of outcomes. I propose a new framework that regulatory agencies, such as the Consumer Financial Protection Bureau, can adopt to measure disparities and fight discrimination. This proposal charts an empirical course for antidiscrimination law in fair lending and also carries promise for other algorithmic contexts, such as criminal justice and employment.
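
As a hedged illustration of what outcome-focused review could look like in practice, the following sketch uses hypothetical column names and made-up numbers and is not the framework the article proposes for the CFPB. It shows that a reviewer with access to algorithmic prices and borrower group labels can measure disparities in outcomes directly, rather than auditing inputs:

    # Illustrative outcome analysis, not the article's proposed CFPB framework:
    # compare a lender's algorithmic prices across protected groups.
    import pandas as pd

    def outcome_disparity(df: pd.DataFrame,
                          price_col: str = "offered_rate",
                          group_col: str = "group") -> pd.Series:
        """Mean offered price by group, relative to the overall mean."""
        return df.groupby(group_col)[price_col].mean() - df[price_col].mean()

    # Example with made-up numbers (hypothetical column names).
    loans = pd.DataFrame({
        "offered_rate": [4.9, 5.1, 5.0, 5.4, 5.6, 5.5],
        "group":        ["A", "A", "A", "B", "B", "B"],
    })
    print(outcome_disparity(loans))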

Keywords: Fair Lending, FHA, ECOA, Discrimination, Artificial Intelligence, Credit Pricing, Machine Learning, Big Data, Algorithms, CFPB, HUD

Suggested Citation

Gillis, Talia B., The Input Fallacy (February 16, 2021). Minnesota Law Review, forthcoming 2022. Available at SSRN: https://ssrn.com/abstract=3571266 or http://dx.doi.org/10.2139/ssrn.3571266

Talia B. Gillis (Contact Author)

Columbia Law School

