Optimal Deep Neural Networks by Maximization of the Approximation Power
35 Pages Posted: 12 May 2020 Last revised: 11 Jun 2020
Date Written: June 10, 2020
We propose an optimal architecture for deep neural networks of a given size. The optimal architecture is obtained by maximizing the minimum number of linear regions approximated by a deep neural network with a ReLU activation function. The accuracy of the approximation depends on the structure of the neural network, characterized by the number of nodes and the dependence and hierarchy between nodes within and across layers. For a given number of nodes, we show how the accuracy of the approximation improves as the width and depth of the network are chosen optimally. More complex datasets naturally call for larger architectures, which perform better under our optimization procedure. A Monte Carlo simulation exercise illustrates the outperformance of the optimized architecture against cross-validation methods and grid search for linear and nonlinear prediction models. An application of the methodology to the Boston Housing dataset confirms empirically the outperformance of our method against state-of-the-art machine learning models.
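To illustrate the width-depth trade-off described in the abstract, the following minimal sketch (not the authors' exact procedure) selects the depth and width of an equal-width ReLU network with a fixed hidden-node budget by maximizing a well-known lower bound on the number of linear regions (Montufar et al., 2014). Function names, the equal-width restriction, and the choice of lower bound are illustrative assumptions.

```python
# Hedged sketch: pick (depth, width) for a fixed hidden-node budget by maximizing
# a lower bound on the number of linear regions of a ReLU network.
from math import comb, floor


def linear_region_lower_bound(widths, n0):
    """Lower bound on the number of linear regions of a ReLU network
    with input dimension n0 and hidden-layer widths `widths`
    (Montufar et al., 2014)."""
    if any(w < n0 for w in widths):
        return 0  # the bound assumes each hidden layer is at least as wide as the input
    prod = 1
    for w in widths[:-1]:
        prod *= floor(w / n0) ** n0
    last = sum(comb(widths[-1], j) for j in range(n0 + 1))
    return prod * last


def best_equal_width_architecture(total_nodes, n0):
    """Search equal-width architectures using exactly `total_nodes` hidden nodes
    and return the (depth, width, bound) triple maximizing the lower bound."""
    best = None
    for depth in range(1, total_nodes + 1):
        if total_nodes % depth:
            continue  # only consider exact splits of the node budget
        width = total_nodes // depth
        bound = linear_region_lower_bound([width] * depth, n0)
        if best is None or bound > best[2]:
            best = (depth, width, bound)
    return best


if __name__ == "__main__":
    depth, width, bound = best_equal_width_architecture(total_nodes=24, n0=2)
    print(f"depth={depth}, width={width}, lower bound on linear regions={bound}")
```

With a budget of 24 hidden nodes and a 2-dimensional input, this sketch prefers a deeper configuration (4 layers of width 6) over a single wide layer, consistent with the abstract's claim that, for a fixed number of nodes, approximation power improves when width and depth are chosen jointly rather than defaulting to a shallow network.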
Keywords: Deep neural networks, shallow networks, universal approximation theorem, ReLU activation function, Boston Housing dataset
JEL Classification: C45, C61