Societal Biases Reinforcement Through Machine Learning – A Credit Scoring Perspective

To appear in AI and Ethics

14 Pages · Posted: 9 Jul 2020 · Last revised: 2 Nov 2020

Bertrand Hassani

Université Paris I Panthéon-Sorbonne; University College London - Department of Computer Science

Date Written: June 12, 2020

Abstract

Do machine learning and AI ensure that social biases thrive? This paper aims to analyse this issue. Because algorithms are informed by data, if these data are corrupted from a social-bias perspective, a well-performing machine learning algorithm will learn the patterns they contain and reproduce them in its predictions, whether for classification or regression. In other words, the way society behaves, positively or negatively, is necessarily reflected by the models. In this paper, we analyse how social biases are transmitted from the data into banks' loan approvals by predicting either the gender or the ethnicity of the customers using the exact same information provided by the customers through their applications.
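The abstract's core test can be illustrated with a minimal sketch: if a classifier can recover a protected attribute (e.g. gender) from the same application fields used for credit decisions, those fields act as proxies through which societal bias can leak into approval models. The sketch below assumes a CSV of numerically encoded applications with hypothetical columns "gender" and "loan_approved"; it uses a Random Forest and SMOTE (the keywords listed below), but it is an illustration under these assumptions, not the author's actual pipeline.

```python
# Sketch: probe whether loan-application features proxy a protected attribute.
# Assumptions (not from the paper): file "applications.csv" with numeric features
# and hypothetical columns "gender" (protected attribute) and "loan_approved".
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score
from imblearn.over_sampling import SMOTE  # rebalances the minority class

applications = pd.read_csv("applications.csv")
X = applications.drop(columns=["gender", "loan_approved"])
y = applications["gender"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Oversample only the training split to avoid leaking synthetic points into the test set.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_res, y_res)

# Balanced accuracy well above chance suggests the application data encode the
# protected attribute, so bias can be transmitted into models trained on it.
print("balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))
```

A balanced-accuracy score close to 0.5 would indicate the features carry little information about the protected attribute; a markedly higher score would indicate the kind of proxy transmission the paper investigates.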

Keywords: SMOTE, Machine Learning, Social Bias, Credit Scoring, Random Forest

JEL Classification: C60, C80, G21, G41

Suggested Citation

Hassani, Bertrand, Societal Biases Reinforcement Through Machine Learning – A Credit Scoring Perspective (June 12, 2020). To appear in AI and Ethics, Available at SSRN: https://ssrn.com/abstract=3625691 or http://dx.doi.org/10.2139/ssrn.3625691

Bertrand Hassani (Contact Author)

Université Paris I Panthéon-Sorbonne

17, rue de la Sorbonne
75005 Paris
France

University College London - Department of Computer Science

United Kingdom
