Explaining Algorithmic Decisions

4 GEO. L. TECH. REV. 711 (2020)

20 pages. Posted: 26 Feb 2020. Last revised: 24 Sep 2020.

Gabriel Nicholas

New York University School of Law

Date Written: July 25, 2020

Abstract

Algorithmic systems are increasingly instrumental to how private and public actors make real-world decisions. Often, the internal reasoning underlying these systems is opaque to the humans who use them. This Article gives a broad technical overview of how algorithms work and what tools exist to interrogate how they come to decisions. It is aimed at a non-technical audience and builds on technical and ontological scholarship from the nascent field of explainable artificial intelligence (XAI). Part II defines and contextualizes the terms "algorithm" and "explanation." Part III proposes a hypothetical machine learning algorithm and explores how feature engineering and dimensionality affect the capacity of humans to understand how it works. Part IV looks at the unique explainability problems posed by systems that combine multiple opaque algorithms and at the latest tools developed to address them.

Keywords: algorithmic decision-making, algorithmic transparency, algorithmic secrecy, algorithmic opacity, algorithmic accountability, algorithm, automated decision-making, information law, algorithmic explainability, explainable artificial intelligence, XAI

Suggested Citation

Nicholas, Gabriel, Explaining Algorithmic Decisions (July 25, 2020), 4 GEO. L. TECH. REV. 711 (2020). Available at SSRN: https://ssrn.com/abstract=3523456

Gabriel Nicholas (Contact Author)

New York University School of Law ( email )

40 Washington Square South
New York, NY 10012-1099
United States
