Comparing the Interpretability of Machine Learning Classifiers for Brain Tumour Survival Prediction

34 Pages · Posted: 26 Jul 2022

Colleen Elizabeth Charlton

University of Edinburgh - Artificial Intelligence and its Applications Institute

Michael TC Poon

University of Edinburgh - Brain Tumour Centre of Excellence

Paul Brennan

University of Edinburgh - Brain Tumour Centre of Excellence; University of Edinburgh - Translational Neurosurgery

Jacques D. Fleuriot

University of Edinburgh - Artificial Intelligence and its Applications Institute

Abstract

Background and Objective: Predicting survival in patients diagnosed with a brain tumour is challenging because of heterogeneous tumour behaviour and treatment response. Better estimates of prognosis would aid treatment planning and patient support, and advances in machine learning have informed the development of clinical predictive models; however, because these models often lack interpretability, their integration into clinical practice is almost non-existent. Using a real-world dataset made available specifically for this study, we compare five classification models with varying degrees of interpretability for predicting brain tumour survival greater than one year following diagnosis.

Methods: Three intrinsically interpretable ‘glass box’ classifiers (Bayesian Rule Lists, Explainable Boosting Machine, and Logistic Regression) and two ‘black box’ classifiers (Random Forest and Support Vector Machine) were trained on a novel brain tumour dataset to predict one-year survival. All models were evaluated using standard performance metrics, and model interpretability was quantified using both global and local explanation methods. The black box models, which do not support intrinsic interpretability analysis, were evaluated globally using the Morris method and locally using LIME and SHAP.
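
A minimal sketch of this training-and-evaluation setup, assuming scikit-learn and the open-source `interpret` package; the synthetic data, feature count, and hyperparameters are illustrative placeholders rather than the paper's actual configuration, and Bayesian Rule Lists (available in, e.g., the `imodels` package) is omitted for brevity:

```python
# Hedged sketch: compare glass-box and black-box classifiers on a binary
# one-year-survival task. Synthetic data stands in for the non-public
# clinical dataset used in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from interpret.glassbox import ExplainableBoostingClassifier

# Placeholder features/labels; the paper's predictors include diagnosis,
# Karnofsky performance score, tumour morphology, and first treatment.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

models = {
    "Logistic Regression (glass box)": LogisticRegression(max_iter=1000),
    "Explainable Boosting Machine (glass box)": ExplainableBoostingClassifier(),
    "Random Forest (black box)": RandomForestClassifier(random_state=0),
    "Support Vector Machine (black box)": SVC(probability=True, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    score = balanced_accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: balanced accuracy = {score:.3f}")
```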

Results: The Random Forest model marginally outperformed all other algorithms, achieving 78.8% balanced accuracy. Across all models, diagnosis (tumour type), Karnofsky performance score, tumour morphology, and first treatment were the top contributors to the prediction of one-year survival. The Explainable Boosting Machine achieved a balanced accuracy (77.7%) on par with the black box models while remaining fully interpretable. The post-hoc explanation methods LIME and SHAP were helpful for understanding misclassified predictions; however, the instability of these explanations highlights the need for robust, gold-standard interpretability methods.
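
A companion sketch of the post-hoc local explanations mentioned above, using the public `lime` and `shap` packages; `models`, `X_train`, and `X_test` carry over from the previous sketch, and the feature and class names are hypothetical:

```python
# Hedged sketch: explain a single Random Forest prediction with LIME and SHAP.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

rf = models["Random Forest (black box)"]
feature_names = [f"feature_{i}" for i in range(X_train.shape[1])]

# LIME: fit a sparse local surrogate model around one test instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["<1 year", ">=1 year"], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], rf.predict_proba, num_features=5)
print(lime_exp.as_list())  # (feature condition, weight) pairs

# SHAP: additive per-feature attributions from a tree-specific explainer.
tree_explainer = shap.TreeExplainer(rf)
shap_values = tree_explainer.shap_values(X_test[:1])
print(np.shape(shap_values))
```

Re-running the LIME call on the same instance with different random seeds is a simple way to observe the explanation instability noted above.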

Conclusion: Interpretable models are a natural choice for predictive medicine, and intrinsically interpretable models such as the Explainable Boosting Machine may offer an advantage over traditional clinical assessment of brain tumour prognosis by weighting additional risk factors and stratifying patients accordingly. Agreement between model predictions and clinical knowledge is essential for establishing trust in a model's decision-making process, as well as confidence that the model will make accurate predictions when applied to new data.

Note:
Funding Information: MTCP is supported by Cancer Research UK Brain Tumour Centre of Excellence Award (C157/A27589).

Conflict of Interests: The authors declare no conflict of interest.

Ethical Approval: Data collection was approved by the Southeast Scotland Research Ethics Committee (reference 17/SS/0019).

Keywords: Bayesian rule lists, Explainable Boosting Machine, interpretable models, machine learning, brain cancer, survival

Suggested Citation

Charlton, Colleen Elizabeth and Poon, Michael TC and Brennan, Paul and Fleuriot, Jacques D., Comparing the Interpretability of Machine Learning Classifiers for Brain Tumour Survival Prediction. Available at SSRN: https://ssrn.com/abstract=4164349 or http://dx.doi.org/10.2139/ssrn.4164349

Colleen Elizabeth Charlton (Contact Author)

University of Edinburgh - Artificial Intelligence and its Applications Institute

Michael TC Poon

University of Edinburgh - Brain Tumour Centre of Excellence

Paul Brennan

University of Edinburgh - Brain Tumour Centre of Excellence; University of Edinburgh - Translational Neurosurgery

Edinburgh, United Kingdom

Jacques D. Fleuriot

University of Edinburgh - Artificial Intelligence and its Applications Institute
