The Input Fallacy

86 pages. Posted: 5 May 2020. Last revised: 21 Apr 2021.

Date Written: February 16, 2021


Algorithmic credit pricing threatens to discriminate against protected groups. Traditionally, fair lending law has addressed such threats by scrutinizing inputs. But input scrutiny has become a fallacy in the world of algorithms.

Using a rich dataset of mortgages, I simulate algorithmic credit pricing and demonstrate that input scrutiny fails to address discrimination concerns and threatens to create an algorithmic myth of colorblindness. The ubiquity of correlations in big data combined with the flexibility and complexity of machine learning means that one cannot rule out the consideration of protected characteristics, such as race, even when one formally excludes them. Moreover, using inputs that include protected characteristics can in fact reduce disparate outcomes.

Nevertheless, the leading approaches to discrimination law in the algorithmic age continue to commit the input fallacy. These approaches suggest that we exclude protected characteristics and their proxies and limit algorithms to pre-approved inputs. Using my simulation exercise, I refute these approaches with new analysis. I demonstrate that they fail on their own terms, are infeasible, and overlook the benefits of accurate prediction. These failures are particularly harmful to marginalized groups and individuals because they threaten to perpetuate their historical exclusion from credit and, thus, from a central avenue to greater prosperity and equality.

I argue that fair lending law must shift to outcome-focused analysis. When it is no longer possible to scrutinize inputs, outcome analysis provides the only way to evaluate whether a pricing method leads to impermissible disparities. This is true not only under the legal doctrine of disparate impact, which has always cared about outcomes, but also under the doctrine of disparate treatment, which has historically avoided examining disparate outcomes. Now, disparate treatment too can no longer rely on input scrutiny and must be considered through the lens of outcomes. I propose a new framework that regulatory agencies, such as the Consumer Financial Protection Bureau, can adopt to measure disparities and fight discrimination. This proposal charts an empirical course for antidiscrimination law in fair lending and also carries promise for other algorithmic contexts, such as criminal justice and employment.

Keywords: Fair Lending, FHA, ECOA, Discrimination, Artificial Intelligence, Credit Pricing, Machine Learning, Big Data, Algorithms, CFPB, HUD

Suggested Citation

Gillis, Talia B., The Input Fallacy (February 16, 2021). Minnesota Law Review, forthcoming 2022. Available at SSRN.

Talia B. Gillis (Contact Author)

Columbia Law School

