Lost in Translation: The Limits of Explainability in AI

42 Cardozo Arts & Ent. L.J. 391 (2024)

49 Pages. Posted: 7 Aug 2023

Hofit Wasserman-Rozen

Tel Aviv University

Ran Gilad-Bachrach

Tel Aviv University

Niva Elkin-Koren

Tel-Aviv University - Faculty of Law

Date Written: August 4, 2023

Abstract

As artificial intelligence becomes more prevalent, regulators are increasingly turning to legal measures, such as a “right to explanation,” to protect against the potential risks raised by AI systems. But are eXplainable AI (XAI) tools, the artificial intelligence tools that generate such explanations, up to the task?

This paper critically examines XAI’s potential to facilitate the right to explanation by applying the prism of explanation’s role in law to different stakeholders. Inspecting the underlying functions of reason-giving reveals a different objective for each stakeholder involved. From the perspective of a decision-subject, reason-giving facilitates due process and acknowledges human agency. From a decision-maker’s perspective, reason-giving contributes to improving the quality of the decisions themselves. From an ecosystem perspective, reason-giving may strengthen the authority of the decision-making system toward different stakeholders by promoting accountability and legitimacy and by providing better guidance.

Applying this analytical framework to XAI-generated explanations reveals that XAI fails to fulfill the underlying objectives of the right to explanation from the perspectives of both the decision-subject and the decision-maker. In contrast, XAI is well suited to fulfill the underlying functions of reason-giving from the ecosystem’s perspective, namely, strengthening the authority of the decision-making system. Lacking all other virtues, however, this isolated capability may be misused or abused, ultimately harming the human audience XAI is meant to serve. This disparity between human decision-making and automated decision-making makes XAI an insufficient and even risky tool, rather than a guardian of human rights. After analyzing these ramifications, the paper concludes by urging regulators and the XAI community to reconsider the pursuit of explainability and of a right to explanation for AI systems.

Keywords: Explainability, XAI, AI, Legal Decision-Making, Law and Technology

Suggested Citation

Wasserman-Rozen, Hofit and Gilad-Bachrach, Ran and Elkin-Koren, Niva, Lost in Translation: The Limits of Explainability in AI (August 4, 2023), 42 Cardozo Arts & Ent. L.J. 391 (2024). Available at SSRN: https://ssrn.com/abstract=4531323 or http://dx.doi.org/10.2139/ssrn.4531323

Hofit Wasserman-Rozen (Contact Author)

Tel Aviv University (email)

Tel-Aviv
Israel

Ran Gilad-Bachrach

Tel Aviv University (email)

Tel-Aviv
Israel

Niva Elkin-Koren

Tel-Aviv University - Faculty of Law (email)

Ramat Aviv
Tel Aviv, 6997801
Israel

Paper statistics

Downloads: 385
Abstract Views: 986
Rank: 149,090