Shapley-Lorenz Decompositions in eXplainable Artificial Intelligence

15 Pages Posted: 13 Mar 2020

Date Written: February 29, 2020

Abstract

Explainability of artificial intelligence models has become a crucial issue, especially in the most heavily regulated fields, such as health and finance. In this paper, we provide a global explainable AI model based on Lorenz decompositions, thus extending previous contributions based on variance decompositions. This makes the resulting Shapley-Lorenz decomposition more generally applicable, and provides a unifying variable importance criterion that combines predictive accuracy with explainability, using a normalised, easy-to-interpret metric. The proposed decomposition is illustrated in the context of a real financial problem: the prediction of Bitcoin prices.
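The idea sketched in the abstract — scoring each feature by its Shapley-weighted marginal contribution to the Lorenz zonoid of the model's predictions — can be illustrated with a minimal example. This is not the authors' implementation: the Lorenz zonoid value is computed here via the standard Gini-coefficient formula, and simple OLS sub-models (an assumed, illustrative choice) stand in for whatever predictive model is being explained.

```python
import itertools
import math

import numpy as np


def lorenz_zonoid(yhat):
    """Lorenz zonoid value of a positive-mean prediction vector,
    computed via the Gini-coefficient formula."""
    n = len(yhat)
    s = np.sort(yhat)
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum((ranks - (n + 1) / 2.0) * s) / (n * n * np.mean(yhat))


def fit_predict(X, y, cols):
    """Fit an OLS sub-model on the feature subset `cols` and return fitted
    values. The empty subset gives the constant mean prediction, whose
    zonoid value is zero."""
    if not cols:
        return np.full(len(y), y.mean())
    A = np.column_stack([np.ones(len(y)), X[:, cols]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ beta


def shapley_lorenz(X, y):
    """Shapley-style decomposition of the full model's Lorenz zonoid:
    each feature receives the Shapley-weighted average of its marginal
    zonoid gain over all subsets of the remaining features."""
    p = X.shape[1]
    phi = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        for r in range(p):  # subset sizes 0 .. p-1
            w = math.factorial(r) * math.factorial(p - r - 1) / math.factorial(p)
            for S in itertools.combinations(others, r):
                gain = (lorenz_zonoid(fit_predict(X, y, list(S) + [j]))
                        - lorenz_zonoid(fit_predict(X, y, list(S))))
                phi[j] += w * gain
    return phi


# Toy illustration: x0 drives the response, x1 is irrelevant noise.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 5.0 + 2.0 * X[:, 0] + 0.05 * rng.normal(size=200)
print(shapley_lorenz(X, y))
```

By construction the contributions sum exactly to the Lorenz zonoid of the full model (the usual Shapley efficiency property), which is what makes the measure a normalised, directly comparable importance score.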

Keywords: Shapley values, Lorenz Zonoids, Predictive accuracy

JEL Classification: C45, C53, C58, G32

Suggested Citation

Giudici, Paolo and Raffinetti, Emanuela, Shapley-Lorenz Decompositions in eXplainable Artificial Intelligence (February 29, 2020). Available at SSRN: https://ssrn.com/abstract=3546773 or http://dx.doi.org/10.2139/ssrn.3546773

Paolo Giudici (Contact Author)

University of Pavia

Corso Strada Nuova, 65
27100 Pavia
Italy

Emanuela Raffinetti

University of Milan

Via Festa del Perdono, 7
20122 Milan
Italy

