Researchers’ Data Analysis Choices: An Excess of False Positives?
30 Pages · Posted: 30 Jul 2018 · Last revised: 14 Jun 2019
Date Written: June 2, 2019
The paper assesses empirical accounting research, focusing on the epistemological consequences of researchers' data analysis methods: how analysis choices are made and why they can have negative (and positive) consequences. A central issue is researchers' standard approach, the pursuit of null rejection. Although papers predominantly report attaining such a finding, data analysis methods as commonly applied raise the likelihood of erroneous conclusions. First, researchers dismiss findings that deviate from what is expected, treating such unwanted outcomes as "preliminary" and therefore as requiring no reporting. Second, most settings rely on large N, which more or less ensures a sufficiently small p-value. The paper argues that these two aspects tilt findings toward false positives (FPs, or "Type I errors"). Such errors become even more pervasive because, by convention, little harm is done if a researcher publishes findings that later are viewed as questionable, or worse. The paper further explains why researchers do not consider mitigating FPs via supplementary data analyses: doing so would lower the probability of rejecting the null hypothesis. The discussion brings out inherent shortcomings in the publication process, which tend to magnify the odds of FPs being published. The paper argues that generally accepted practices have led to an equilibrium that will be difficult to dislodge. Nonetheless, most of the current literature will, in due course, be dismissed as at best dubious.
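To make the large-N point concrete, the following minimal simulation sketch (not drawn from the paper; the effect size, sample sizes, and variable names are illustrative assumptions) shows how a practically trivial mean difference can become statistically significant once N is large enough:

```python
# Illustrative sketch (assumptions, not the paper's analysis): with a very
# large N, even a small true effect tends to yield a "significant" p-value
# in a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def pvalue_for(n, true_effect=0.05):
    """Two-sample t-test p-value for a small mean difference at sample size n."""
    control = rng.normal(loc=0.0, scale=1.0, size=n)
    treatment = rng.normal(loc=true_effect, scale=1.0, size=n)
    return stats.ttest_ind(treatment, control).pvalue

for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9,}: p = {pvalue_for(n):.4f}")

# Typical output: the p-value shrinks toward zero as N grows, even though the
# assumed effect (0.05 standard deviations) is economically negligible.
```

The sketch illustrates why null rejection alone carries little evidential weight in large samples: statistical significance says nothing about whether the estimated effect is large enough to matter.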
Keywords: Data Analyses