Significance, Relevance and Explainability in the Machine Learning Age: An Econometrics and Financial Data Science Perspective
Forthcoming, European Journal of Finance
14 Pages
Posted: 15 Dec 2020
Date Written: September 30, 2020
Although machine learning is frequently associated with neural networks, it also comprises econometric regression approaches and other statistical techniques whose accuracy improves with an increasing number of observations. What constitutes high-quality machine learning, however, remains unclear. Proponents of deep learning (i.e. neural networks) value computational efficiency over human interpretability and tolerate the “black box” character of their algorithms, whereas proponents of explainable artificial intelligence (XAI) employ traceable “white box” methods (e.g. regressions) to enhance explainability for human decision makers. We extend Brooks et al.’s (2019) work on significance and relevance as assessment criteria in econometrics and financial data science to contribute to this debate. Specifically, we identify explainability as the Achilles heel of classic machine learning approaches such as neural networks, which are not fully replicable, lack transparency and traceability, and therefore do not permit attempts to establish causal inference. We conclude by suggesting routes for future research to advance the design and efficiency of “white box” algorithms.
Keywords: explainability, explainable artificial intelligence (XAI), neural networks, relevance, regressions, significance
JEL Classification: C40, C45, C58, C80, Y80