Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach

66 Pages · Posted: 10 Apr 2020 · Last revised: 3 Sep 2020

Valérie Beaudouin

Télécom Paris

Isabelle Bloch

Télécom Paris

David Bounie

Télécom Paris

Stéphan Clémençon

Télécom Paris

Florence d'Alché-Buc

Télécom Paris - Institut Polytechnique de Paris

James Eagan

Télécom Paris - Institut Polytechnique de Paris

Winston Maxwell

Télécom Paris - Institut Polytechnique de Paris

Pavlo Mozharovskyi

Télécom Paris - Institut Polytechnique de Paris

Jayneel Parekh

Télécom Paris - Institut Polytechnique de Paris

Date Written: March 23, 2020

Abstract

The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning. Deep learning methods are remarkably accurate, but also opaque, which limits their potential use in safety-critical applications. To achieve trust and accountability, designers and operators of machine learning algorithms must be able to explain an algorithm's inner workings, its results, and the causes of its failures to users, regulators, and citizens. The originality of this paper lies in combining the technical, legal, and economic aspects of explainability to develop a framework for defining the "right" level of explainability in a given context. We propose three logical steps. First, define the main contextual factors, such as the audience of the explanation, the operational context, the level of harm the system could cause, and the applicable legal and regulatory framework; this step helps characterize the operational and legal needs for explanation and the corresponding social benefits. Second, examine the technical tools available, including post hoc approaches (input perturbation, saliency maps, etc.) and hybrid AI approaches. Third, based on the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved. We identify seven kinds of costs and emphasize that explanations are socially useful only when the total social benefits exceed those costs.
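The second step above refers to post hoc, input-perturbation techniques such as saliency maps. As a purely illustrative sketch, and not the method developed in the paper, the following Python snippet computes an occlusion-based saliency map: it masks one patch of the input at a time and records how much the model's score drops. The names occlusion_saliency, model_fn, and the toy scoring function are hypothetical and introduced here only for illustration.

```python
import numpy as np


def occlusion_saliency(model_fn, x, patch_size=4, baseline=0.0):
    """Occlusion-based saliency by input perturbation (illustrative sketch).

    model_fn   : callable mapping a 2-D array to a scalar score
    x          : 2-D input (e.g. a grayscale image)
    patch_size : side length of the square patch that is masked
    baseline   : value used to fill the masked patch
    """
    base_score = model_fn(x)
    saliency = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            x_pert = x.copy()
            x_pert[i:i + patch_size, j:j + patch_size] = baseline
            # A patch is deemed important if hiding it makes the score drop.
            saliency[i:i + patch_size, j:j + patch_size] = base_score - model_fn(x_pert)
    return saliency


if __name__ == "__main__":
    # Toy "model": a score that depends only on the central region of the image.
    rng = np.random.default_rng(0)
    image = rng.random((16, 16))
    score = lambda im: float(im[4:12, 4:12].sum())
    print(occlusion_saliency(score, image).round(2))
```

Patches whose occlusion causes the largest score drop are the regions the model relied on most; local explanation outputs of this kind are among those that the third step weighs against the costs of producing them.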

Keywords: artificial intelligence, AI, explainability, interpretability, neural networks, hybrid AI, law, regulation, safety, liability, fairness, accountability, cost-benefit analysis

JEL Classification: K23, K13, D61, D63

Suggested Citation

Beaudouin, Valérie and Bloch, Isabelle and Bounie, David and Clémençon, Stéphan and d'Alché-Buc, Florence and Eagan, James and Maxwell, Winston and Mozharovskyi, Pavlo and Parekh, Jayneel, Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach (March 23, 2020). Available at SSRN: https://ssrn.com/abstract=3559477 or http://dx.doi.org/10.2139/ssrn.3559477

Author Contact Information

All authors are at Télécom Paris, 19 Place Marguerite Perey, 91120 Palaiseau, France.

Winston Maxwell (Contact Author)
Télécom Paris - Institut Polytechnique de Paris
HOME PAGE: http://telecom-paris.fr

Paper statistics: 533 Downloads · 2,563 Abstract Views · Rank 108,764