Understanding Explainability and Interpretability for Risk Science Applications

14 Pages Posted: 8 Dec 2023

Shital Thekdi

University of Richmond

Terje Aven

University of Stavanger

Abstract

Recent adoption of advanced technologies, such as artificial intelligence (AI) methods, shows expanding potential for risk and safety applications. However, these technologies are so complex that even their programmers may have only a limited understanding of how the models work. The field of explainability is increasingly being explored for the purpose of explaining models and model results to various stakeholders. Similarly, interpretability is being explored to describe the reasoning behind model predictions and decisions. This paper studies the concepts of explainability and interpretability in a risk and risk science context, with a focus on risk assessment, risk management, and risk communication. The main purpose of the paper is to show that there is potential for using these concepts and related knowledge fields to enhance risk science and its applications. The discussion is illustrated using examples from the context of autonomous vehicles. This paper will be of interest to risk analysts, policymakers, and other stakeholders who foresee a larger influence of advanced technologies on risk assessment, risk management, and risk communication.

Keywords: Explainability, interpretability, risk science, machine learning, artificial intelligence

Suggested Citation

Thekdi, Shital and Aven, Terje, Understanding Explainability and Interpretability for Risk Science Applications. Available at SSRN: https://ssrn.com/abstract=4658011 or http://dx.doi.org/10.2139/ssrn.4658011

Shital Thekdi (Contact Author)

University of Richmond ( email )

28 Westhampton Way
Richmond, VA 23173
United States

Terje Aven

University of Stavanger ( email )
