Towards Explaining Deep Learning: Significance Tests for Multi-Layer Perceptrons

33 Pages Posted: 3 Apr 2020

Date Written: March 12, 2020


Horel and Giesecke (2019) (HG) propose a gradient-based test statistic for one-layer sigmoid neural networks and study its asymptotics using nonparametric techniques. However, their results do not cover the most useful and attractive neural network architectures, e.g., multi-layer perceptrons. This paper extends their results to fully connected feed-forward neural networks whose activation function is bounded, non-polynomial, and infinitely differentiable (smooth). To derive the test statistic and its asymptotic distribution, we establish the consistency of multi-layer perceptrons via the method of sieves. Like that of HG, the significance test offers several desirable characteristics, such as computational efficiency and the ability to rank variables by their influence; unlike HG's, it applies to multi-layer perceptrons. A Monte Carlo analysis validates the theoretical results.
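As a rough illustration of the gradient-based idea the abstract describes, the sketch below computes, for a toy multi-layer perceptron with a smooth bounded activation (tanh), the average squared partial derivative of the output with respect to each input, and ranks inputs by that statistic. The network weights, sample, and statistic here are illustrative assumptions, not the paper's estimator or its asymptotic theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer perceptron f(x) = w2 @ tanh(W1 @ x + b1) + b2.
# tanh is bounded, non-polynomial, and smooth, matching the class of
# activations the paper considers (weights here are random placeholders).
d, h = 4, 8
W1 = rng.normal(size=(h, d))
b1 = rng.normal(size=h)
w2 = rng.normal(size=h)
b2 = 0.0

def grad_f(x):
    """Gradient of the network output with respect to the input x."""
    z = W1 @ x + b1
    # d/dx [w2 @ tanh(z) + b2] = W1.T @ (w2 * sech^2(z))
    return W1.T @ (w2 * (1.0 - np.tanh(z) ** 2))

# Gradient-based statistic: mean squared partial derivative per input,
# estimated over a sample of evaluation points.
X = rng.normal(size=(1000, d))
stat = np.mean([grad_f(x) ** 2 for x in X], axis=0)

# Rank variables by influence (largest statistic first).
ranking = np.argsort(stat)[::-1]
```

An input that the fitted network ignores would have a near-zero statistic; the paper's contribution is the asymptotic distribution that turns such a statistic into a formal significance test.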

Keywords: Neural Networks, Significance Test, Asymptotic Distribution

JEL Classification: C1, C5

Suggested Citation

Fallahgoul, Hasan A and Franstianto, Vincentius, Towards Explaining Deep Learning: Significance Tests for Multi-Layer Perceptrons (March 12, 2020). Available at SSRN.

Hasan A Fallahgoul (Contact Author)

Monash University ( email )

Clayton Campus
Victoria, 3800


Vincentius Franstianto

Monash University ( email )

23 Innovation Walk
Wellington Road
Clayton, Victoria 3800

