Explaining eXplainable AI
46 Pages · Posted: 8 Nov 2024 · Last revised: 6 May 2025
Date Written: May 06, 2025
Abstract
“Black box” AI systems are proliferating in high-stakes legal contexts—from creditors making lending decisions, to parole boards and judges assessing carceral risks, to agencies determining eligibility for critical programs like Medicaid. The problems with relying on opaque machine learning systems for these decisions—including inaccuracy, bias, and misalignment—are well understood by policymakers, legal scholars, and technical experts. To counter the risks of black-box decision-making, policymakers and regulators have increasingly turned to a familiar safeguard: the right to an explanation. Finding a path to “eXplainable AI” (XAI) that balances legal requirements with technical realities, however, remains stubbornly elusive.
This Article offers a solution to this dilemma by connecting law, computer science, and behavioral science to give pragmatic substance to the right to algorithmic explanations. It provides a novel framework that maps technical XAI tools onto legal requirements for explanations. The Article then introduces the “explainy” software package, which the authors created to empirically assess the suitability of these methods for fulfilling legal goals.
The central insight driving this framework is that people should be at the center of the algorithmic decision-making discourse. The choice of algorithmic explanation should be guided by how the intended audience will understand it, rather than focused exclusively on the needs of developers or on superficial compliance with the law. Questions about the level of detail, complexity, and format of algorithmic explanations have vexed policymakers and developers alike. By crafting explanations that are legally relevant, technically feasible, and behaviorally sound, we can finally achieve “Legal XAI.”
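To illustrate the kind of technical XAI tool the framework maps onto legal requirements, the sketch below generates a post-hoc, model-agnostic explanation of a black-box classifier and renders it in plain language for a non-technical audience. It is an illustration only: it uses scikit-learn's permutation importance rather than the explainy package's own API (which the Article only names here), and the lending-style feature names and synthetic data are hypothetical.

```python
# Illustrative sketch only: scikit-learn's permutation importance stands in for the
# post-hoc XAI methods the framework evaluates; this is not the explainy package's API.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical credit-style features standing in for a lending decision model.
feature_names = ["income", "debt_ratio", "credit_history_len", "num_open_accounts"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ("black box") model of the sort the Article discusses.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A post-hoc, model-agnostic explanation: how much does each feature matter?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Render the explanation in plain language for the intended audience.
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda pair: pair[1], reverse=True):
    print(f"Shuffling '{name}' lowers accuracy by about {mean_imp:.3f}, "
          f"suggesting the decision relies on it to that degree.")
```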