Non-Asimov Explanations: Regulating AI Through Transparency

26 Pages
Posted: 24 Nov 2021


Chris Reed

Queen Mary University of London, School of Law

Keri Grieman

Queen Mary University of London, School of Law; University of Oxford; The Alan Turing Institute

Joseph Early

University of Southampton - Department of Electronics and Computer Science; The Alan Turing Institute

Date Written: November 24, 2021

Abstract

An important part of law and regulation is demanding explanations for actual and potential failures. We ask questions like: What happened (or might happen) to cause this failure? And why did (or might) it happen? These are disguised normative questions: they really ask what ought to have happened, and how the humans involved ought to have behaved. If we ask the same questions about AI systems, we run into two difficulties. The first is what might be described as the ‘black box’ problem, which lawyers have begun to investigate. Some modern AI systems are so complex that even their makers might be unable to understand their workings fully, and thus to answer the what and why questions. Technologists are beginning to work on this problem, aiming to use technology to explain the workings of autonomous systems more effectively, and also to produce autonomous systems which are easier to explain. The second difficulty is so far underexplored, and is a more important one for law and regulation. It is that the kinds of explanation required by law and regulation are not, at least at first sight, the kinds of explanation which AI systems can currently provide. To answer the normative questions, law and regulation seeks a narrative explanation, a story. Humans usually explain their decisions and actions in narrative form (even if the work of psychologists and neuroscientists tells us that some of these explanations are devised ex post, and may not accurately reflect what went on in the human mind). At present, we seek these kinds of narrative explanation from AI technology, because as humans we seek to understand technology’s workings by constructing a story to explain them. Our cultural history makes this inevitable: authors like Asimov, writing narratives about future AI technologies such as intelligent robots, have told us that these systems act in ways explainable by the narrative logic we use to explain human actions, and so they can also be explained to us in those terms. This is, at least currently, not true. This chapter argues that we can only solve this problem by working from both sides. Technologists will need to find ways to tell us stories which law and regulation can use. But law and regulation will also need to accept different kinds of narratives, which tell stories about fundamental legal and regulatory concepts like fairness and reasonableness that are different from those we are used to.

Keywords: Artificial intelligence, AI, AI regulation, XAI, AI explanations, Black box explanations

JEL Classification: K00, K13, K29

Suggested Citation

Reed, Chris and Grieman, Keri and Early, Joseph, Non-Asimov Explanations: Regulating AI Through Transparency (November 24, 2021). Queen Mary Law Research Paper No. 370/2021, Available at SSRN: https://ssrn.com/abstract=3970518

Chris Reed (Contact Author)

Queen Mary University of London, School of Law ( email )

67-69 Lincoln’s Inn Fields
London, WC2A 3JB
United Kingdom

Keri Grieman

Queen Mary University of London, School of Law ( email )

Mile End Road
London, E1 4NS
United Kingdom

University of Oxford ( email )

The Alan Turing Institute ( email )

British Library
96 Euston Road
London, NW1 2DB
United Kingdom

Joseph Early

University of Southampton - Department of Electronics and Computer Science ( email )

Southampton
United Kingdom

The Alan Turing Institute ( email )

British Library
96 Euston Road
London, NW1 2DB
United Kingdom


Paper statistics

Downloads: 123
Abstract Views: 887
Rank: 438,101