Dangerous Deep Learning: How The Machines Can Hit The Wall
23 Pages; Posted: 16 Jun 2020
Date Written: May 22, 2020
Abstract
Democratization of the mysterious art of data science via Amazon Web Services, Google Cloud Platform, Microsoft Azure, and other machine learning (ML) service providers might make it too easy to apply ML. To underline this, we present a worked-out example in R (with sources in the appendix) in which deep learning either falls behind much simpler methods or learns something that is not there. We start from an already published application of a LeNet-style convolutional neural network (CNN) for image recognition. We show that this complex CNN is outperformed by a single-layer perceptron, and that a logistic regression comes close when applied naively and even outperforms the CNN when a transformation is applied to the inputs. The reason for this is highlighted by visual data analysis. We then demonstrate how a multi-layer perceptron (MLP) is lured into learning a function y = f(x) where there is no relation between y and x. Summing up, we show how a mighty CNN learns less than a two-feature logistic regression, and how a simple MLP "learns" more than a linear regression when there is nothing to learn at all. One way to avoid these pitfalls is academic training in data science.
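The spurious-learning pitfall described above is the classic spurious-regression effect: fitting one random walk against another, independent random walk often yields a deceptively strong fit, while the same fit on the differenced (stationary) series collapses toward zero. The paper's worked example is in R; the following is a minimal, hypothetical Python sketch of the same phenomenon (series length, seed, and variable names are illustrative assumptions, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two *independent* random walks: cumulative sums of iid noise.
x = np.cumsum(rng.standard_normal(n))
y = np.cumsum(rng.standard_normal(n))

# R-squared of a simple regression of y on x (levels):
# often spuriously large, although x carries no information about y.
r2_levels = np.corrcoef(x, y)[0, 1] ** 2

# The same regression on first differences (the true, iid innovations):
# R-squared near zero, as it should be for independent series.
r2_diff = np.corrcoef(np.diff(x), np.diff(y))[0, 1] ** 2

print(f"R^2 on levels:      {r2_levels:.3f}")
print(f"R^2 on differences: {r2_diff:.3f}")
```

A flexible model such as an MLP is even more easily lured than this linear fit, since it can shape itself to any wandering pattern the two walks happen to share in-sample.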
Keywords: Image Recognition, CNN, MLP, Spurious Regression, Microsoft Azure
JEL Classification: C01, C02, C19, C55