Too Big to Fail: Larger Samples and False Discoveries
37 Pages. Posted: 3 Feb 2009; last revised: 25 Jul 2011.
Date Written: June 15, 2011
Abstract
The Internet presents great opportunities for research about information technology, allowing IS researchers to collect very large and rich datasets. Research papers with tens or even hundreds of thousands of observations are now common, especially in electronic commerce. Large samples are preferable to small ones in that they provide greater statistical power and produce more precise estimates. However, statistical inference using p-values does not scale up to large samples and often leads to erroneous conclusions. We find evidence of an over-reliance on p-values in large-sample IS studies in top IS journals and conferences. In this commentary, we focus on interpreting the effects of individual independent variables on a dependent variable in regression-type models. We discuss how p-values deflate as the sample grows and illustrate this deflation with data from over 340,000 digital camera auctions on eBay. The commentary recommends that IS researchers be more conservative in interpreting statistical significance in large-sample studies and instead interpret results in terms of practical significance. In particular, we suggest that authors of large-sample IS studies report and discuss confidence intervals for independent variables of interest rather than coefficient signs and p-values. We also suggest taking advantage of large datasets to examine how coefficients and p-values change as sample size increases, and to estimate models on multiple subsamples as a further robustness check.
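The deflation described above is easy to reproduce in simulation. The following minimal sketch (not from the paper; the regression setup, slope value, and noise level are illustrative assumptions) fits a simple regression with a practically negligible true slope at increasing sample sizes. The p-value collapses toward zero as n grows, while the confidence interval keeps the tiny magnitude of the effect visible:

```python
# Illustrative sketch of p-value deflation in large samples.
# All parameter values here are assumptions chosen for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta = 0.01    # tiny, practically negligible true slope
sigma = 1.0    # noise standard deviation

for n in [100, 1_000, 10_000, 100_000]:
    x = rng.normal(size=n)
    y = beta * x + rng.normal(scale=sigma, size=n)
    res = stats.linregress(x, y)
    # 95% confidence interval for the estimated slope
    half_width = stats.t.ppf(0.975, n - 2) * res.stderr
    print(f"n={n:>7}  slope={res.slope:+.4f}  "
          f"95% CI=({res.slope - half_width:+.4f}, "
          f"{res.slope + half_width:+.4f})  p={res.pvalue:.3g}")
```

At small n the slope is statistically indistinguishable from zero; at very large n it becomes "highly significant" even though the interval shows the effect remains practically trivial, which is the commentary's argument for reporting confidence intervals rather than coefficient signs and p-values.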
Keywords: large samples, p-values, statistical significance, practical significance