The Fairness of Credit Scoring Models
46 Pages | Posted: 18 Feb 2021 | Last revised: 27 Apr 2021
Date Written: April 27, 2021
In credit markets, screening algorithms discriminate between good-type and bad-type borrowers. This is their raison d'être. However, in doing so, they also often discriminate between individuals sharing a protected attribute (e.g., gender, age, race) and the rest of the population. In this paper, we show how to test (1) whether there exists a statistically significant difference in rejection rates or interest rates, called lack of fairness, between the protected and unprotected groups, and (2) whether this difference is due solely to creditworthiness. When condition (2) is not met, the screening algorithm does not comply with the fair-lending principle and may be deemed illegal. Our framework provides guidance on how algorithmic fairness can be monitored by lenders, controlled by their regulators, and improved for the benefit of protected groups.
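The abstract describes two tests: a test for a significant difference in outcomes between groups, and a test of whether any such difference is explained by creditworthiness alone. The sketch below is only an illustration of that general idea on simulated data, not the authors' actual procedure; the dataset, variable names (protected, income, debt_ratio, rejected), and the choice of a two-proportion z-test plus a logistic regression are all assumptions made for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical data: one row per applicant, with a binary protected
# attribute, creditworthiness features, and a rejection indicator.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "protected": rng.integers(0, 2, n),           # e.g. gender/age/race group
    "income": rng.normal(50_000, 15_000, n),
    "debt_ratio": rng.uniform(0, 1, n),
})
# Simulated rejections driven by creditworthiness only (for illustration).
logit = -2 + 3 * df["debt_ratio"] - 0.00002 * df["income"]
df["rejected"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# (1) Lack-of-fairness test: is the rejection rate significantly different
# between the protected group and the rest of the population?
counts = df.groupby("protected")["rejected"].sum().values
nobs = df.groupby("protected")["rejected"].count().values
stat, pval = proportions_ztest(counts, nobs)
print(f"Two-proportion z-test: z = {stat:.2f}, p = {pval:.3f}")

# (2) Is any difference explained by creditworthiness alone? Regress the
# rejection decision on creditworthiness features plus the protected
# attribute; a significant coefficient on `protected` after conditioning
# on creditworthiness would point to non-compliance with fair lending.
X = sm.add_constant(df[["income", "debt_ratio", "protected"]])
model = sm.Logit(df["rejected"], X).fit(disp=0)
print(model.summary2().tables[1].loc["protected"])
```

In this simulated setting rejections depend only on creditworthiness, so the coefficient on the protected attribute should be insignificant; on real data, the same regression would flag a residual group effect not explained by creditworthiness.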
Keywords: Fairness; Credit scoring models; Discrimination; Machine Learning; Artificial Intelligence
JEL Classification: G21, G29, C10, C38, C55