Towards Explaining Deep Learning: Significance Tests for Multi-Layer Perceptrons

33 Pages | Posted: 3 Apr 2020

Date Written: March 12, 2020

Abstract

Horel and Giesecke (2019) (HG) propose a gradient-based test statistic for one-layer sigmoid neural networks and study its asymptotics using nonparametric techniques. However, their results do not cover the most useful and attractive neural network architectures, e.g., multi-layer perceptrons. This paper extends their results to fully connected feed-forward neural networks whose activation function is bounded, non-polynomial, and infinitely differentiable (smooth). To derive the test statistic and its asymptotic distribution, we establish the consistency of the multi-layer perceptron estimator via the method of sieves. Like HG, the significance test offers several desirable characteristics, such as computational efficiency and the ability to rank variables by their influence; unlike HG, however, it applies to multi-layer perceptrons. To validate the theoretical results, a Monte Carlo analysis is presented.
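The sketch below is a minimal illustration, not the authors' implementation, of the gradient-based idea described in the abstract: for a fitted network f, the influence of variable x_j is summarized by the sample average of the squared partial derivative of f with respect to x_j. The architecture, simulated data, and the simple relevant-versus-irrelevant comparison are assumptions made for illustration; the paper's actual test statistic and its asymptotic distribution are developed in the text.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Simulated data: y depends on x1 and x2 only; x3 is irrelevant.
n, d = 2000, 3
X = torch.randn(n, d)
y = torch.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * torch.randn(n)

# A small fully connected feed-forward network with a smooth, bounded
# activation (tanh), matching the class of architectures in the abstract.
model = nn.Sequential(
    nn.Linear(d, 16), nn.Tanh(),
    nn.Linear(16, 8), nn.Tanh(),
    nn.Linear(8, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()

def gradient_statistic(model, X, j):
    """Sample average of the squared partial derivative of the fitted
    network with respect to input variable j."""
    Xr = X.clone().requires_grad_(True)
    out = model(Xr).sum()                       # scalar; rows are independent
    grads = torch.autograd.grad(out, Xr)[0]     # (n, d) matrix of partials
    return (grads[:, j] ** 2).mean().item()

for j in range(d):
    print(f"x{j + 1}: lambda_hat = {gradient_statistic(model, X, j):.4f}")
# Expected pattern: sizeable values for x1 and x2, a value near zero for x3.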

Keywords: Neural Networks, Significance Test, Asymptotic Distribution

JEL Classification: C1, C5

Suggested Citation

Fallahgoul, Hasan A and Franstianto, Vincentius, Towards Explaining Deep Learning: Significance Tests for Multi-Layer Perceptrons (March 12, 2020). Available at SSRN: https://ssrn.com/abstract=3553253 or http://dx.doi.org/10.2139/ssrn.3553253

Hasan A Fallahgoul (Contact Author)

Monash University ( email )

Clayton Campus
Victoria, 3800
Australia

HOME PAGE: http://www.hfallahgoul.com

Vincentius Franstianto

Monash University ( email )

23 Innovation Walk
Wellington Road
Clayton, Victoria 3800
Australia

