Varieties of AI Explanations under the Law. From the GDPR to the AIA, and Beyond
in: Holzinger, Goebel, Fong, Moon, Müller and Samek (eds.), Lecture Notes on Artificial Intelligence 13200: xxAI - beyond explainable AI, Springer, 2022
35 Pages Posted: 28 Aug 2021 Last revised: 10 May 2022
Date Written: August 25, 2021
Abstract
The quest to explain the output of artificial intelligence systems has clearly moved from a merely technical endeavor to one of high legal and political relevance. In this paper, we provide an overview of legal obligations to explain AI and evaluate current policy proposals. In doing so, we distinguish between different functional varieties of AI explanations - such as multiple forms of enabling, technical and protective transparency - and show how different legal areas engage with and mandate such different types of explanations to varying degrees. Starting with the rights-enabling framework of the GDPR, we proceed to uncover technical and protective forms of explanations owed under contract, tort and banking law. Moreover, we discuss what the recent EU proposal for an Artificial Intelligence Act means for explainable AI, and review the proposal’s strengths and limitations in this respect. Finally, from a policy perspective, we advocate for moving beyond mere explainability towards a more encompassing framework for trustworthy and responsible AI that includes actionable explanations, values-in-design and co-design methodologies, interactions with algorithmic fairness, and quality benchmarking.
Keywords: Explainable AI, XAI, responsible AI, trustworthy AI, Artificial Intelligence Act, transparency, data protection law, banking law, contract law, tort law, product liability
JEL Classification: K12, K13