
Accountability of AI Under the Law: The Role of Explanation

16 Pages · Posted: 6 Nov 2017 · Last revised: 1 Dec 2017

Finale Doshi-Velez

Harvard University - Harvard School of Engineering and Applied Sciences

Mason Kortz

Harvard University - Berkman Klein Center for Internet & Society; Harvard Law School

Ryan Budish

Harvard University - Berkman Klein Center for Internet & Society

Christopher Bavitz

Harvard University - Berkman Klein Center for Internet & Society

Samuel J. Gershman

Harvard University

David O'Brien

Harvard University - Berkman Klein Center for Internet & Society

Stuart Shieber

Harvard University - Harvard School of Engineering and Applied Sciences

Jim Waldo

Harvard University; Harvard University - HarvardX

David Weinberger

Harvard University - Berkman Klein Center for Internet & Society

Alexandra Wood

Harvard University - Berkman Klein Center for Internet & Society

Date Written: November 3, 2017

Abstract

The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before; applications range from clinical decision support to autonomous driving and predictive policing. That said, common sense reasoning [McCarthy, 1960] remains one of the holy grails of AI, and there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014].

There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. The question of a legal right to explanation from AI systems was recently debated in the context of the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.

Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts in which explanation is currently required under the law. We find that there is great variation in when explanation is demanded, but there are also important consistencies: when demanding explanation from humans, what we typically want to know is how and whether certain input factors affected the final decision or outcome.

These consistencies allow us to list the technical considerations that must be addressed if we desire AI systems that can provide the kinds of explanations currently required of humans under the law. Contrary to the popular wisdom that AI systems are indecipherable black boxes, we find that this level of explanation should often be technically feasible, though it may sometimes be practically onerous: certain aspects of explanation may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that, for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI system to a different standard.
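To make this concrete, the following is a minimal sketch (ours, not from the paper) of one way an AI system could answer the question the abstract highlights: how and whether a given input factor affected the final decision. It ablates each input factor in turn and reports the change in the model's output. The toy_credit_model, its weights, and the factor names are purely hypothetical stand-ins for any black-box decision function.

# Minimal sketch: perturbation-based factor influence for a black-box model.
# All names and weights here are hypothetical illustrations, not from the paper.

def factor_influences(predict, inputs, baseline=0.0):
    """For each input factor, report how much the model's score changes
    when that factor is replaced by a baseline value."""
    original = predict(inputs)
    influences = {}
    for name in inputs:
        perturbed = dict(inputs)
        perturbed[name] = baseline  # ablate one factor at a time
        influences[name] = original - predict(perturbed)
    return influences

def toy_credit_model(x):
    # Hypothetical linear scoring model used purely for illustration.
    return 0.6 * x["income"] + 0.3 * x["credit_history"] - 0.5 * x["debt"]

applicant = {"income": 1.0, "credit_history": 0.8, "debt": 0.4}
print(factor_influences(toy_credit_model, applicant))
# {'income': 0.6, 'credit_history': 0.24, 'debt': -0.2}

A report of this form ("income raised the score by 0.6; debt lowered it by 0.2") is one simple instance of the input-factor explanations the abstract argues are often technically feasible; real systems would need more careful baselines and handling of correlated factors.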

Keywords: Artificial Intelligence, Algorithms, Machine Learning, Explanation, Cyberlaw, Transparency

Suggested Citation

Doshi-Velez, Finale and Kortz, Mason and Budish, Ryan and Bavitz, Christopher and Gershman, Samuel J. and O'Brien, David and Shieber, Stuart and Waldo, Jim and Weinberger, David and Wood, Alexandra, Accountability of AI Under the Law: The Role of Explanation (November 3, 2017). Berkman Center Research Publication, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3064761

Finale Doshi-Velez (Contact Author)

Harvard University - Harvard School of Engineering and Applied Sciences ( email )

29 Oxford Street
Cambridge, MA 02138
United States

Mason Kortz

Harvard University - Berkman Klein Center for Internet & Society ( email )

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States
617-495-2845 (Phone)

Harvard Law School ( email )

1575 Massachusetts Avenue
Hauser 406
Cambridge, MA 02138
United States
617-495-2845 (Phone)

Ryan Budish

Harvard University - Berkman Klein Center for Internet & Society ( email )

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States
617-384-9108 (Phone)

HOME PAGE: http://cyber.law.harvard.edu/people/rbudish

Christopher Bavitz

Harvard University - Berkman Klein Center for Internet & Society ( email )

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States

Samuel J. Gershman

Harvard University ( email )

1875 Cambridge Street
Cambridge, MA 02138
United States

David O'Brien

Harvard University - Berkman Klein Center for Internet & Society ( email )

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States

Stuart Shieber

Harvard University - Harvard School of Engineering and Applied Sciences ( email )

29 Oxford Street
Cambridge, MA 02138
United States

Jim Waldo

Harvard University ( email )

1875 Cambridge Street
Cambridge, MA 02138
United States

Harvard University - HarvardX ( email )

125 Mt Auburn St.
Cambridge, MA 02138
United States
781-442-0497 (Phone)

David Weinberger

Harvard University - Berkman Klein Center for Internet & Society ( email )

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States

Alexandra Wood

Harvard University - Berkman Klein Center for Internet & Society ( email )

Harvard Law School
23 Everett, 2nd Floor
Cambridge, MA 02138
United States
