Explaining Algorithmic Decisions
4 GEO. L. TECH. REV. 711 (2020)
20 Pages. Posted: February 26, 2020; last revised: September 24, 2020.
Date Written: July 25, 2020
Abstract
Algorithmic systems are increasingly instrumental to how private and public actors make real-world decisions. Often, the internal reasoning underlying these systems is opaque to the humans who use them. This Article gives a broad technical overview of how algorithms work and what tools exist to interrogate how they reach decisions. It is aimed at a non-technical audience and builds on technical and ontological scholarship from the nascent field of explainable artificial intelligence (XAI). Part II defines and contextualizes the terms "algorithm" and "explanation." Part III proposes a hypothetical machine learning algorithm and explores how feature engineering and dimensionality affect humans' capacity to understand how it works. Part IV examines the unique explainability problems posed by systems that combine multiple opaque algorithms and the latest tools developed to address them.
Keywords: algorithmic decision-making, algorithmic transparency, algorithmic secrecy, algorithmic opacity, algorithmic accountability, algorithm, automated decision-making, information law, algorithmic explainability, explainable artificial intelligence, XAI
