Sniff Tests as a Screen in the Publication Process: Throwing Out the Wheat with the Chaff
44 Pages Posted: 17 Sep 2018 Last revised: 21 Jun 2023
Date Written: September 2018
Abstract
The increasing demand for empirical rigor has led to the growing use of auxiliary tests (balance, specification, over-identification, placebo, etc.) in assessing the credibility of a paper’s main results. We dub these “sniff tests” because rejection is bad news for the author and standards for passing are informal. Using a sample of nearly 30,000 published sniff tests collected from scores of economics journals, we study the use of sniff tests as a screen in the publication process. For the subsample of balance tests in randomized controlled trials, our structural estimates suggest that the publication process removes 46% of significant sniff tests, yet only one in ten of these is actually misspecified. For other tests, we estimate more latent misspecification and less removal. Surprisingly, more authors would be justified in attributing significant sniff tests to random bad luck.
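The "one in ten" figure reflects a base-rate effect: when few studies are truly misspecified, most significant sniff tests are false alarms from well-specified studies. The sketch below illustrates this with a mixture simulation; the parameter values (a 1% latent misspecification rate, 50% rejection power when misspecified, a 5% significance threshold) are illustrative assumptions, not estimates from the paper.

```python
import random

random.seed(0)

# Illustrative mixture model (parameter values are assumptions, not the
# paper's estimates): a small share of studies are truly misspecified;
# the rest are well specified, so their sniff tests reject ~5% of the
# time by chance alone.
N = 100_000
share_misspecified = 0.01  # assumed latent misspecification rate
power = 0.50               # assumed rejection rate when misspecified
alpha = 0.05               # conventional significance threshold

significant_total = 0
significant_and_misspecified = 0
for _ in range(N):
    misspecified = random.random() < share_misspecified
    rejects = random.random() < (power if misspecified else alpha)
    if rejects:
        significant_total += 1
        significant_and_misspecified += misspecified

# Among significant sniff tests, what fraction is actually misspecified?
frac = significant_and_misspecified / significant_total
print(round(frac, 3))
```

With these assumed parameters, the expected fraction is 0.005 / (0.005 + 0.0495) ≈ 0.09, so roughly nine in ten significant sniff tests come from well-specified studies that were simply unlucky.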