Financial Inclusion as a Fairness Criterion in Credit Risk Assessment
9 Pages
Posted: 25 Jun 2020
Date Written: March 25, 2020
Abstract
With loan origination becoming almost exclusively algorithmic in developed countries such as the United States, credit risk assessment algorithms are being critically analyzed for bias against certain groups of the population. Newer prediction systems are measured against older statistical models in terms of the number of loans approved and the pricing of those loans across protected groups. Other research has contextualized these systems within existing consumer protection frameworks to highlight legal constraints on algorithmic decisions. Implicit in existing research on the fairness of credit risk assessment systems is the boundary of a formal credit market. While concerns about bias of prediction models against protected classes extend to emerging economies, the counterfactual against which the performance of these systems should be measured in such settings is an informal lending ecosystem notorious for predatory lending. The comparison is further complicated by the fact that accessing formal credit via newly emerging digital lending platforms requires individuals to share personal information such as location history and phone contact details. This paper presents a decision-theoretic framework that expands existing discussions of the fairness of credit risk assessment systems to accommodate the trade-offs confronted in emerging economies, by making explicit the cost of participation in these systems. The roles of digital lending apps in the formal credit markets of India and the United States are contrasted to broadly outline how the adoption and function of newer credit modeling techniques differ between developed and developing countries.
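The abstract does not spell out the formal model; as a rough illustrative sketch only, the trade-off it describes can be written in expected-utility terms, with all notation below (U_formal, U_informal, p_approve, u_loan, c_data, c_predatory) assumed for exposition rather than taken from the paper:

% Illustrative decision-theoretic sketch (notation assumed, not from the paper):
% a prospective borrower weighs formal, algorithmically underwritten credit
% against the informal-lending counterfactual.
\[
U_{\mathrm{formal}} = p_{\mathrm{approve}}\, u_{\mathrm{loan}} - c_{\mathrm{data}},
\qquad
U_{\mathrm{informal}} = u_{\mathrm{loan}} - c_{\mathrm{predatory}},
\]
\[
\text{participate in the formal market} \iff U_{\mathrm{formal}} \ge U_{\mathrm{informal}}.
\]

Here c_data stands in for the cost of sharing data such as location history and phone contacts with a digital lending platform, and c_predatory for the expected cost of the informal alternative; under this reading, a fairness analysis that makes the cost of participation explicit would compare these quantities, not approval rates alone.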
Keywords: Credit Risk Assessment, Right to Explanation, Algorithmic Fairness, Financial Inclusion, Digital Lending