Insurance Discrimination and Fairness in Machine Learning: An Ethical Analysis

30 Pages · Posted: 28 Aug 2019 · Last revised: 14 Sep 2019

Michele Loi

University of Zurich

Markus Christen

University of Zurich

Date Written: August 17, 2019

Abstract

Here we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance of data-science experts and business managers. The reference to private insurance as a business practice is essential in our approach, because the consequences of discrimination and predictive inaccuracy in underwriting differ from those of using predictive algorithms in other sectors (e.g., medical diagnosis, sentencing). Moreover, the computer science literature has demonstrated a trade-off between the extent to which one can pursue non-discrimination and predictive accuracy. Again, the moral assessment of this trade-off depends on the context of application.

Keywords: insurance, discrimination, big data, fairness in machine learning, ethics

Suggested Citation

Loi, Michele and Christen, Markus, Insurance Discrimination and Fairness in Machine Learning: An Ethical Analysis (August 17, 2019). Available at SSRN: https://ssrn.com/abstract=3438823 or http://dx.doi.org/10.2139/ssrn.3438823

Michele Loi (Contact Author)

University of Zurich ( email )

Rämistrasse 71
Zürich, CH-8006
Switzerland

Markus Christen

University of Zurich ( email )

Rämistrasse 71
Zürich, CH-8006
Switzerland
