False Dreams of Algorithmic Fairness: The Case of Credit Pricing

93 Pages · Posted: 5 May 2020

Date Written: February 18, 2020

Abstract

Credit pricing is changing. Traditionally, lenders priced consumer credit using a small set of borrower and loan characteristics, sometimes with the assistance of loan officers. Today, lenders increasingly use big data and advanced prediction technologies, such as machine learning, to set the terms of credit. These modern underwriting practices could increase prices for protected groups, potentially giving rise to violations of anti-discrimination laws.

What is not new is the concern that personalized credit pricing relies on characteristics or inputs that reflect preexisting discrimination or disparities. Fair lending law has traditionally addressed this concern through input scrutiny, either by limiting the consideration of protected characteristics or by attempting to isolate inputs that cause disparities.

But input scrutiny is no longer effective. Using data on past mortgages, I simulate algorithmic credit pricing and demonstrate that input scrutiny fails to address discrimination concerns. The ubiquity of correlations in big data, combined with the flexibility and complexity of machine learning, means that one cannot rule out the consideration of a protected characteristic even when it is formally excluded. Similarly, in the machine-learning context it may be impossible to determine which inputs drive disparate outcomes.
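The mechanism can be illustrated outside the paper's own data. The minimal Python sketch below uses synthetic data (not the paper's mortgage data): a pricing model trained without the protected characteristic still produces group-level price differences, and the excluded characteristic remains largely recoverable from the permitted inputs. All variable names and data-generating assumptions here are illustrative only.

```python
# Illustrative sketch on synthetic data (not the paper's HMDA-based simulation):
# even when the protected characteristic is formally excluded, correlated inputs
# let a flexible model reconstruct it and produce group-level price differences.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.binomial(1, 0.3, n)                                 # protected characteristic
income = 50 + 20 * rng.standard_normal(n) - 10 * group          # correlated with group
neighborhood = 0.7 * group + 0.3 * rng.standard_normal(n)       # strong proxy for group
credit_history = rng.standard_normal(n)

# True risk depends only on income and credit history, never on group itself.
risk = 0.05 - 0.0005 * income - 0.02 * credit_history + 0.01 * rng.standard_normal(n)

X = np.column_stack([income, neighborhood, credit_history])     # group is excluded

# (1) The "excluded" characteristic is largely recoverable from the other inputs.
proxy = LogisticRegression(max_iter=1000).fit(X, group)
print("accuracy recovering group from permitted inputs:", round(proxy.score(X, group), 3))

# (2) A flexible pricing model fit without group still prices the groups differently.
pricer = GradientBoostingRegressor().fit(X, risk)
price = pricer.predict(X)
print("mean predicted price by group:",
      round(price[group == 0].mean(), 4), "vs", round(price[group == 1].mean(), 4))
```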

Despite these fundamental changes, prominent approaches to applying discrimination law in the algorithmic age continue to embrace traditional law's focus on inputs. These approaches suggest that we exclude protected characteristics and their proxies, limit algorithms to pre-approved inputs, and use statistical methods to neutralize the effect of protected characteristics. Using my simulation exercise, I demonstrate that these approaches fail on their own terms, are likely infeasible, and overlook the benefits of accurate prediction.
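One of these input-centered remedies, statistical neutralization, can be sketched concretely. The hypothetical snippet below, reusing the synthetic variables from the sketch above, residualizes each input against the protected characteristic before the pricing model is fit. The helper name and the choice of linear residualization are assumptions for illustration, not the paper's procedure.

```python
# Hypothetical sketch of statistical "neutralization" (orthogonalization):
# strip from each input the component linearly explained by the protected
# characteristic, then fit the pricing model on the residuals.
# Reuses X, group, and risk from the previous sketch.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

def orthogonalize(X, group):
    """Replace each column of X with its residual from a regression on group."""
    G = group.reshape(-1, 1)
    residuals = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        fitted = LinearRegression().fit(G, X[:, j]).predict(G)
        residuals[:, j] = X[:, j] - fitted
    return residuals

# X_neutral = orthogonalize(X, group)
# neutral_pricer = GradientBoostingRegressor().fit(X_neutral, risk)
```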

I argue that the shortcomings of current approaches mean that fair lending law must make the necessary, though uncomfortable, shift to outcome-focused analysis. When it is no longer possible to scrutinize inputs, outcome analysis provides a way to evaluate whether a pricing method leads to impermissible disparities. This is true not only under the legal doctrine of disparate impact, which has always been concerned with outcomes, but also under the doctrine of disparate treatment, which historically has avoided examining disparate outcomes. Disparate treatment, too, can no longer rely on input scrutiny and must be considered through the lens of outcomes. I propose a new framework that regulatory agencies, such as the Consumer Financial Protection Bureau, can adopt to measure the disparities created by moving to an algorithmic world, enabling an explicit analysis of the trade-off between prediction accuracy and other policy goals.
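The framework itself is legal and institutional, but the kind of outcome comparison it contemplates can be illustrated with a small, self-contained Python sketch. The function name, metrics, and toy numbers below are assumptions for illustration, not the paper's specification: each pricing rule is summarized by the disparity it produces alongside its predictive accuracy, making the trade-off explicit.

```python
# Hypothetical outcome-focused comparison: summarize each pricing rule by the
# cross-group price gap it produces and by its prediction error.
import numpy as np

def outcome_report(prices, true_risk, group):
    """Report the price gap across groups and the RMSE of a pricing rule."""
    gap = prices[group == 1].mean() - prices[group == 0].mean()
    rmse = float(np.sqrt(np.mean((prices - true_risk) ** 2)))
    return {"group_price_gap": round(float(gap), 4), "rmse": round(rmse, 4)}

# Toy comparison: a coarse traditional rule vs. a more accurate algorithmic rule.
group = np.array([0, 0, 0, 1, 1, 1])
true_risk = np.array([0.02, 0.03, 0.04, 0.03, 0.05, 0.06])
traditional = np.full(6, 0.04)          # one price for everyone
algorithmic = true_risk + 0.005         # tracks individual risk closely

print(outcome_report(traditional, true_risk, group))   # no gap, larger error
print(outcome_report(algorithmic, true_risk, group))   # larger gap, smaller error
```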

Keywords: Fair Lending, FHA, ECOA, Discrimination, Artificial Intelligence, Credit Pricing, Machine Learning, Big Data, Algorithms, CFPB, HUD

Suggested Citation

Gillis, Talia B., False Dreams of Algorithmic Fairness: The Case of Credit Pricing (February 18, 2020). Available at SSRN: https://ssrn.com/abstract=3571266 or http://dx.doi.org/10.2139/ssrn.3571266

Talia B. Gillis (Contact Author)

Harvard University, Law School

1563 Massachusetts Avenue
Cambridge, MA 02138
United States

Paper statistics

Downloads: 99
Abstract Views: 400
Rank: 287,988