The Intuitive Appeal of Explainable Machines

66 Pages. Posted: 2 Mar 2018. Last revised: 16 Jun 2018.

Andrew D. Selbst

Data & Society Research Institute; Yale Information Society Project

Solon Barocas

Cornell University

Date Written: February 19, 2018

Abstract

As algorithmic decision-making has become synonymous with inexplicable decision-making, we have become obsessed with opening the black box. This Article responds to a growing chorus of legal scholars and policymakers demanding explainable machines. Their instinct makes sense; what is unexplainable is usually unaccountable. But the calls for explanation are a reaction to two distinct but often conflated properties of machine-learning models: inscrutability and non-intuitiveness. Inscrutability means one cannot fully grasp the model itself, while non-intuitiveness means one cannot understand why the model's rules are what they are. Solving inscrutability alone will not resolve law and policy concerns; accountability turns not merely on how models work, but also on whether they are justified.

In this Article, we first explain what makes models inscrutable as a technical matter. We then explore two important examples of existing regulation-by-explanation, as well as techniques within machine learning for explaining inscrutable decisions. We show that while these techniques might allow machine learning to comply with existing laws, compliance will rarely be enough to assess whether decision-making rests on a justifiable basis.
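To give a concrete flavor of the explanation techniques the Article examines, consider a minimal sketch (not drawn from the Article; the data, models, and feature names are hypothetical) of one common post-hoc approach: fitting a simple, interpretable surrogate model to mimic an inscrutable one, so that a human can read off approximate feature weights.

# Minimal sketch of a post-hoc "surrogate model" explanation.
# Hypothetical data and feature names; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                         # hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)  # hypothetical outcomes

# The "inscrutable" model: an ensemble whose internal rules resist inspection.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The explanation: train a simple linear surrogate to mimic the black box,
# then read its coefficients as approximate feature weights.
surrogate = LogisticRegression().fit(X, black_box.predict(X))
for name, coef in zip(["income", "tenure", "age", "zip_density"], surrogate.coef_[0]):
    print(f"{name}: {coef:+.2f}")

Even a faithful surrogate of this kind tells us only what the model does, not whether its rules are justified; here, for instance, the linear surrogate will miss the quadratic dependence on the second feature entirely. That gap is the Article's point about the limits of compliance-by-explanation.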

We argue that calls for explainable machines have failed to recognize the connection between intuition and evaluation, and the limitations of such an approach. A belief in the value of explanation for justification assumes that if only a model is explained, problems will reveal themselves intuitively. Machine learning, however, can uncover relationships that are both non-intuitive and legitimate, frustrating this mode of normative assessment. If justification requires understanding why the model's rules are what they are, we should seek explanations of the process behind a model's development and use, not just explanations of the model itself. This Article illuminates the explanation-intuition dynamic and offers documentation as an alternative approach to evaluating machine learning models.
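As a rough illustration of what such process documentation might capture (the field names below are our own hypothetical gloss, not a schema from the Article), a development record could log the choices that any inquiry into justification would need to interrogate:

# Hypothetical sketch of a model-development record; fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelDevelopmentRecord:
    objective: str               # what outcome the model is meant to predict, and why
    target_definition: str       # how that outcome was operationalized in the data
    training_data_source: str    # provenance of the training data
    features_considered: list[str] = field(default_factory=list)
    features_excluded: list[str] = field(default_factory=list)  # and the reasons
    validation_protocol: str = ""  # how performance and disparities were assessed

record = ModelDevelopmentRecord(
    objective="predict loan default within 24 months",
    target_definition="90+ days delinquent on any payment",
    training_data_source="2015-2017 loan book, single lender",
    features_considered=["income", "tenure", "age", "zip_density"],
    features_excluded=["race (prohibited basis)"],
    validation_protocol="held-out 2018 cohort; error rates compared across groups",
)

The value of a record like this lies not in the code but in what it makes reviewable: the reasons behind the model's rules, rather than the rules alone.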

Keywords: algorithmic accountability, explanations, law and technology, machine learning, big data, privacy, discrimination

Suggested Citation

Selbst, Andrew D. and Barocas, Solon, The Intuitive Appeal of Explainable Machines (February 19, 2018). Fordham Law Review, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3126971 or http://dx.doi.org/10.2139/ssrn.3126971

Andrew D. Selbst (Contact Author)

Data & Society Research Institute ( email )

36 West 20th Street
11th Floor
New York, NY 10011
United States

Yale Information Society Project ( email )

127 Wall Street
New Haven, CT 06511
United States

Solon Barocas

Cornell University ( email )

Ithaca, NY 14853
United States
