Regulating Explainable AI in the European Union. An Overview of the Current Legal Framework(s)
Liane Colonna/Stanley Greenstein (eds.), Nordic Yearbook of Law and Informatics 2020: Law in the Era of Artificial Intelligence
20 Pages. Posted: 11 Aug 2021
Date Written: August 9, 2021
Abstract
Explainable Artificial Intelligence (XAI) is relevant not only for developers who want to understand how their system or model works in order to debug or improve it, but also for those affected by the technology. Determining why a system arrives at a particular algorithmic decision or prediction allows us to understand the technology, to develop trust in it and, where an algorithmic outcome is unlawful, to pursue appropriate remedies against it. Additionally, XAI enables experts (and regulators) to review decisions or predictions and to verify whether legal and regulatory standards have been complied with. All of these points support the notion of opening the black box. On the other hand, there are a number of (legal) arguments against full transparency of Artificial Intelligence (AI) systems, above all the interest in protecting trade secrets, national security and privacy.
Accordingly, this paper explores whether and to what extent individuals are, under EU law, entitled to a right to explanation of automated decision-making, especially when AI systems are used.
Keywords: Explainable AI, XAI, black box, algorithms, AI regulation, EU law, GDPR, rule of law
JEL Classification: K20