What's in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration

iCourts Working Paper Series No. 162, 2019

27 Pages Posted: 19 Jun 2019

Henrik Palmer Olsen

University of Copenhagen - iCourts - Centre of Excellence for International Courts

Jacob Livingston Slosser

University of Copenhagen - iCourts - Centre of Excellence for International Courts

Thomas Troels Hildebrandt

Software, Data, People & Society

Cornelius Wiesener

University of Copenhagen - iCourts - Centre of Excellence for International Courts

Date Written: June 12, 2019

Abstract

Every day, millions of administrative transactions take place. Insurance policies, credit appraisals, and permit and welfare applications, to name a few, are created, invoked, and assessed. Though often treated as banalities of modern life, these transactions frequently carry significant importance. To the extent that such decisions are embodied in a governmental, administrative process, they must meet the requirements set out in administrative law, one of which is the requirement of explainability. Increasingly, many of these tasks are being fully or semi-automated through algorithmic decision-making (ADM) systems. Fearing the opacity of the dreaded black box of these ADM systems, countless ethical guidelines have been produced to combat the lack of computational transparency. Rather than adding yet another ethical framework to an already overcrowded ethics-based literature, we focus on a concrete legal approach and ask: what does explainability actually require? Using a comparative approach, we investigate the extent to which such decisions may be made using computational tools and under what rubric their compatibility with the legal requirement of explainability can be examined. We assess what explainability actually demands with regard to both human and computer-aided decision-making and which recent legislative trends, if any, can be observed. We also critique the field’s unwillingness to apply the standard of explainability already enshrined in administrative law: the human standard. Finally, we introduce what we call the “administrative Turing test,” which could be used to continually validate and strengthen AI-supported decision-making. With this approach, we provide a benchmark of explainability against which future applications of algorithmic decision-making can be measured in a broader European context, without creating an undue burden on implementation.

Keywords: explainability, algorithmic decision making, administrative law, artificial intelligence, black box

Suggested Citation

Olsen, Henrik Palmer and Slosser, Jacob Livingston and Hildebrandt, Thomas Troels and Wiesener, Cornelius, What's in the Box? The Legal Requirement of Explainability in Computationally Aided Decision-Making in Public Administration (June 12, 2019). iCourts Working Paper Series No. 162, 2019. Available at SSRN: https://ssrn.com/abstract=3402974 or http://dx.doi.org/10.2139/ssrn.3402974

Henrik Palmer Olsen (Contact Author)

University of Copenhagen - iCourts - Centre of Excellence for International Courts ( email )

Studiestraede 6
Copenhagen, DK-1455
Denmark

Jacob Livingston Slosser

University of Copenhagen - iCourts - Centre of Excellence for International Courts ( email )

Studiestraede 6
Copenhagen, DK-1455
Denmark

Thomas Troels Hildebrandt

Software, Data, People & Society ( email )

Studiestraede 6
Copenhagen, DK-1455
Denmark

Cornelius Wiesener

University of Copenhagen - iCourts - Centre of Excellence for International Courts ( email )

Studiestraede 6
Copenhagen, DK-1455
Denmark
