Explaining eXplainable AI

46 Pages
Posted: 8 Nov 2024
Last revised: 6 May 2025

Aniket Kesari

Fordham University School of Law

Daniela Sele

Center for Law & Economics, ETH Zurich

Elliott Ash

ETH Zürich

Stefan Bechtold

ETH Zürich

Date Written: May 06, 2025

Abstract

“Black box” AI systems are proliferating in high-stakes legal contexts—from creditors making lending decisions, to parole boards and judges assessing carceral risks, to agencies determining eligibility for critical programs like Medicaid. Problems with relying on opaque machine learning systems for these decisions—including inaccuracy, bias, and misalignment—are well understood by policymakers, legal scholars, and technical experts. To counter the risks of black-box decision-making, policymakers and regulators have increasingly turned to a familiar safeguard: the right to an explanation. Finding a path to “eXplainable AI” (XAI) that balances legal requirements with technical realities, however, remains stubbornly elusive.


This Article offers a solution to this dilemma by connecting law, computer science, and behavioral science to give pragmatic substance to the right to algorithmic explanations. It provides a novel framework that maps technical XAI tools onto legal requirements for explanations, and it introduces the “explainy” software package, which the authors created to empirically assess the suitability of these methods for fulfilling legal goals.
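
To illustrate the kind of pipeline the framework evaluates, the following Python sketch trains an opaque model and produces a local explanation for a single decision. It uses the widely available SHAP library rather than the authors' explainy API (which this page does not document), and the data and variable names are hypothetical placeholders.

    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical lending data; in practice, features might be income,
    # debt-to-income ratio, credit history length, and the like.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

    # Local feature attributions for one applicant's decision -- the kind of
    # individualized explanation a right-to-explanation regime contemplates.
    explainer = shap.TreeExplainer(model)
    attributions = explainer.shap_values(X[:1])
    print(attributions)

Whether such an attribution satisfies a legal explanation requirement is precisely the question the Article's framework is designed to answer: the same technical output may be adequate for a developer auditing the model yet opaque to the applicant it affects.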


The central insight driving this framework is that people should be at the center of the algorithmic decision-making discourse. The choice of algorithmic explanation should be guided by how the intended audience will understand it, rather than by the needs of developers alone or by superficial compliance with the law. Questions about the level of detail, complexity, and format of algorithmic explanations have vexed policymakers and developers alike. By crafting explanations that are legally relevant, technically feasible, and behaviorally sound, we can finally achieve “Legal XAI.”

Suggested Citation

Kesari, Aniket and Sele, Daniela and Ash, Elliott and Bechtold, Stefan, Explaining eXplainable AI (May 06, 2025). Available at SSRN: https://ssrn.com/abstract=4972085 or http://dx.doi.org/10.2139/ssrn.4972085

Aniket Kesari

Fordham University School of Law

140 West 62nd Street
New York, NY 10023
United States

Daniela Sele

Center for Law & Economics, ETH Zurich

Zürichbergstrasse 18
8092 Zurich
Switzerland

HOME PAGE: https://lawecon.ethz.ch/group/senior-scientists/sele.html

Elliott Ash

ETH Zürich

Rämistrasse 101
ZUE F7
Zürich, 8092
Switzerland

Stefan Bechtold (Contact Author)

ETH Zürich

IFW E 47.2
Zurich, 8092
Switzerland
+41-44-632-2670 (Phone)

HOME PAGE: http://www.ip.ethz.ch/people/bechtold

Paper statistics

Downloads: 248
Abstract Views: 1,368
Rank: 270,061