Shapley-Lorenz Decompositions in eXplainable Artificial Intelligence

15 Pages Posted: 13 Mar 2020


Paolo Giudici

University of Pavia

Emanuela Raffinetti

University of Pavia; Department of Economics and Management

Date Written: February 29, 2020

Abstract

Explainability of artificial intelligence models has become a crucial issue, especially in the most regulated fields, such as health and finance. In this paper, we provide a global explainable AI model based on Lorenz decompositions, thereby extending previous contributions based on variance decompositions. This makes the resulting Shapley-Lorenz decomposition more generally applicable, and provides a unifying variable importance criterion that combines predictive accuracy with explainability, using a normalised and easy-to-interpret metric. The proposed decomposition is illustrated in the context of a real financial problem: the prediction of bitcoin prices.
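The abstract describes the method only at a high level: the Lorenz zonoid of a model's fitted values plays the role that variance plays in classical decompositions, and each covariate's contribution is the Shapley-weighted average gain in that zonoid when the covariate is added to every possible subset of the remaining covariates. The following Python sketch illustrates this idea under stated simplifications (linear submodels refitted on each subset, a Gini-type formula for the one-dimensional Lorenz zonoid, synthetic positive data); it is a hypothetical illustration, not the authors' implementation.

    # A minimal, hypothetical sketch of a Shapley-Lorenz decomposition; the
    # linear submodels, the Gini-type zonoid formula, and the synthetic data
    # are assumptions made for illustration, not the authors' code.
    from itertools import combinations
    from math import factorial

    import numpy as np
    from sklearn.linear_model import LinearRegression


    def lorenz_zonoid(values):
        # One-dimensional Lorenz zonoid value, computed here as a Gini-type
        # concentration measure (assumes non-negative values).
        v = np.sort(np.asarray(values, dtype=float))
        n = v.size
        ranks = np.arange(1, n + 1)
        return 2.0 * np.sum(ranks * v) / (n * np.sum(v)) - (n + 1.0) / n


    def shapley_lorenz(X, y):
        # Attribute the Lorenz zonoid of the fitted values to each covariate
        # with Shapley weights, refitting a model on every covariate subset.
        p = X.shape[1]
        features = list(range(p))

        def fitted_lz(subset):
            if not subset:
                return 0.0  # null model: constant prediction, no concentration
            model = LinearRegression().fit(X[:, subset], y)
            return lorenz_zonoid(model.predict(X[:, subset]))

        contributions = np.zeros(p)
        for k in features:
            others = [j for j in features if j != k]
            for size in range(len(others) + 1):
                for subset in combinations(others, size):
                    weight = factorial(size) * factorial(p - size - 1) / factorial(p)
                    gain = fitted_lz(list(subset) + [k]) - fitted_lz(list(subset))
                    contributions[k] += weight * gain
        return contributions


    # Toy usage on synthetic, strictly positive data (purely illustrative).
    rng = np.random.default_rng(0)
    X = rng.uniform(1.0, 10.0, size=(200, 3))
    y = 5.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.5, size=200)
    phi = shapley_lorenz(X, y)
    print(phi)        # per-covariate Shapley-Lorenz contributions
    print(phi.sum())  # by Shapley efficiency, equals the full-model Lorenz zonoid

By the efficiency property of Shapley values, the contributions sum to the Lorenz zonoid of the full model's fitted values, which is what makes the resulting variable importance measure normalised and directly interpretable.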

Keywords: Shapley values, Lorenz zonoids, predictive accuracy

JEL Classification: C45, C53, C58, G32

Suggested Citation

Giudici, Paolo and Raffinetti, Emanuela, Shapley-Lorenz Decompositions in eXplainable Artificial Intelligence (February 29, 2020). Available at SSRN: https://ssrn.com/abstract=3546773 or http://dx.doi.org/10.2139/ssrn.3546773

Paolo Giudici (Contact Author)

University of Pavia ( email )

Via San Felice 7
Pavia, 27100
Italy

Emanuela Raffinetti

University of Pavia ( email )

Via San Felice 5
Pavia, 27100
Italy

Department of Economics and Management

Italy


Paper statistics

Downloads: 207
Abstract Views: 691
Rank: 235,278