Opening the Black Box: An Interpretability Evaluation Framework Using Land-Use Probing Tasks to Understand Deep Learning Traffic Prediction Models

31 Pages Posted: 14 May 2022

Xuehao Zhai

Imperial College London

Fangce Guo

Imperial College London

Aruna Sivakumar

Imperial College London - Department of Civil and Environmental Engineering

Abstract

Emerging machine learning-based models are increasingly being deployed in critical traffic prediction problems. Because of the ‘black box’ nature of these models, there is growing emphasis on understanding their inner workings and on analysing the physical meaning of their hidden layers. Most existing academic studies of Deep Learning (DL) tools in transportation engineering, however, focus predominantly on statistical accuracy, while little attention has been paid to model interpretability. To address this limitation, we propose a novel two-step interpretability evaluation framework with land-use probing tasks to facilitate understanding of the internal functions of a given DL-based traffic prediction model. The framework is designed to determine whether one or more hidden layers inside a DL-based traffic prediction model spontaneously capture spatial characteristics, and whether such spatial characteristics significantly affect the predicted outcome. In the case study, the interpretability of three types of self-supervised pre-training models (i.e., auto-encoders, variational auto-encoders and graph convolutional auto-encoders) is evaluated within the proposed framework. These DL models are trained and tested using parking and land-use data collected from the San Francisco Bay Area, California. The experimental results support the validity of the proposed framework from both a global and a local perspective. All the land-use indices used in testing are spontaneously captured in the hidden layers, and the office and retail land-use indices are the main contributors to prediction accuracy. Based on measurements of spatial interpretability along different dimensions, we also provide a hidden-layer-level post-hoc explanation of why the graph convolutional network outperforms traditional DL-based models.
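To make the probing idea concrete, the sketch below illustrates step one of the framework: a simple linear probe trained on a hidden layer's activations to predict a land-use index, where a high held-out score suggests the layer has spontaneously captured that spatial characteristic. This is a minimal illustration under stated assumptions, not the authors' implementation: the array shapes, the ridge-regression probe, and the names probe_layer, hidden_acts and land_use_index are all introduced here for exposition.

    # Minimal sketch of a land-use probing task (step one of the framework).
    # Assumptions (not from the paper): activations and land-use indices are
    # plain NumPy arrays, and scikit-learn's ridge regression stands in for
    # whatever probe the authors actually use.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    def probe_layer(hidden_acts: np.ndarray, land_use_index: np.ndarray) -> float:
        """Fit a linear probe mapping hidden activations -> a land-use index.

        hidden_acts:    (n_zones, n_hidden) activations from one hidden layer
        land_use_index: (n_zones,) e.g. share of office or retail land use
        Returns the held-out R^2 of the probe.
        """
        X_tr, X_te, y_tr, y_te = train_test_split(
            hidden_acts, land_use_index, test_size=0.3, random_state=0
        )
        probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
        return r2_score(y_te, probe.predict(X_te))

    # Hypothetical usage: compare how well each hidden layer encodes an
    # office-land-use index, given a dict {layer_name: activation array}.
    # for name, acts in layer_activations.items():
    #     print(name, probe_layer(acts, office_index))

In the full two-step framework, probe scores like these would then be paired with a tool such as partial dependence plots (step two) to test whether the captured land-use signal actually influences the predicted outcome.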

Keywords: Traffic prediction, Deep learning, Interpretability evaluation, Land-use probing tasks, Partial dependence plot

Suggested Citation

Zhai, Xuehao and Guo, Fangce and Sivakumar, Aruna, Opening the Black Box: An Interpretability Evaluation Framework Using Land-Use Probing Tasks to Understand Deep Learning Traffic Prediction Models. Available at SSRN: https://ssrn.com/abstract=4109859 or http://dx.doi.org/10.2139/ssrn.4109859

Xuehao Zhai (Contact Author)

Imperial College London ( email )

South Kensington Campus
Exhibition Road
London, SW7 2AZ
United Kingdom

Fangce Guo

Imperial College London ( email )

South Kensington Campus
Exhibition Road
London, SW7 2AZ
United Kingdom

Aruna Sivakumar

Imperial College London - Department of Civil and Environmental Engineering ( email )

Exhibition Road
London, SW7 2AZ
United Kingdom
