Understanding Explainability and Interpretability for Risk Science Applications
14 Pages | Posted: 8 Dec 2023
Abstract
The recent adoption of advanced technologies, particularly those based on artificial intelligence (AI) methods, shows expanding potential for risk and safety applications. However, these technologies are so complex that even their developers may have a limited understanding of how the models work. The field of explainability is increasingly being explored for the purpose of explaining models and model results to various stakeholders. Similarly, interpretability is being explored to describe the reasoning behind model predictions and decisions. This paper studies the concepts of explainability and interpretability in a risk and risk science context, with a focus on risk assessment, risk management, and risk communication. The main purpose of the paper is to show that these concepts and their related knowledge fields have the potential to enhance risk science and its applications. The discussion is illustrated using examples from the context of autonomous vehicles. The paper will be of interest to risk analysts, policymakers, and other stakeholders who foresee a growing influence of advanced technologies on risk assessment, risk management, and risk communication.
Keywords: Explainability, Interpretability, Risk Science, Machine Learning, Artificial Intelligence