The Intuitive Appeal of Explainable Machines

55 Pages · Posted: 2 Mar 2018 · Last revised: 6 Dec 2018

Andrew D. Selbst

UCLA School of Law

Solon Barocas

Microsoft Research; Cornell University

Abstract

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines the properties that set machine learning apart from other ways of developing rules for decision-making and the problems those properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.

In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

Keywords: algorithmic accountability, explanations, law and technology, machine learning, big data, privacy, discrimination

Suggested Citation

Selbst, Andrew D. and Barocas, Solon, The Intuitive Appeal of Explainable Machines, 87 Fordham Law Review 1085 (2018). Available at SSRN: https://ssrn.com/abstract=3126971 or http://dx.doi.org/10.2139/ssrn.3126971
