Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach
43 Pages. Posted: 3 Dec 2020; Last revised: 4 Dec 2020
Date Written: October 21, 2020
Abstract
Businesses increasingly rely on algorithms, data-trained sets of decision rules, to implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” Our contention is that much of the problem of algorithmic transparency can be addressed by rethinking the right to informed consent in the age of artificial intelligence. It is often said that, in the digital era, informed consent is dead. This negative view stems from a rigid understanding of informed consent as a static, complete transaction grounded morally in individual autonomy. Such a view is insufficient when data is used in secondary, non-contextual, and unpredictable ways, as is inescapably the case with advanced AI systems. We submit that an alternative view of informed consent, as an assurance of trust for incomplete transactions, shows why the rationale of informed consent already entails a right to ex post explanation.
Keywords: right to explanation, artificial intelligence ethics, explainable AI (XAI), online privacy, California Consumer Privacy Act (CCPA), General Data Protection Regulation (GDPR)