A Note on Dropping Experimental Subjects Who Fail a Manipulation Check
16 Pages · Posted: 31 Oct 2015 · Last revised: 10 Jan 2016
Date Written: October 14, 2015
Abstract
Dropping subjects after a post-treatment manipulation check is common practice across the social sciences, presumably to restrict estimates to the subpopulation of subjects who understood the experimental prompt. We show that this practice can lead to serious bias and argue for focusing on quantities that are identified without discarding subjects. Generalizing results developed in Lee (2009) and Zhang and Rubin (2003) to the case of multiple treatments, we provide sharp bounds on potential outcomes among those who would pass a manipulation check regardless of treatment assignment. These bounds may have large or even infinite width, implying that this inferential target is often out of reach. As an application, we replicate Press, Sagan and Valentino (2013) with a design that does not drop subjects who failed the manipulation check, and we show that the findings are likely stronger than originally reported. We conclude with suggestions for practice, namely corrections implemented in the experimental design.
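For intuition, the following is a minimal sketch of the binary-treatment trimming bounds from Lee (2009), the special case that the paper generalizes to multiple treatments. It is not the authors' implementation: the function name lee_bounds is hypothetical, and the sketch assumes monotonicity (treatment weakly increases the pass rate), under which control-group passers identify the outcome distribution of "always-passers."

```python
import numpy as np

def lee_bounds(y, z, s):
    """Sharp Lee (2009) trimming bounds on E[Y(1) - Y(0)] among
    'always-passers' (subjects who would pass the manipulation check
    under either assignment), assuming passing is monotone in treatment.

    y : outcomes (observed only where s == 1)
    z : binary treatment indicator (0/1)
    s : binary pass indicator for the manipulation check (0/1)
    """
    p1 = s[z == 1].mean()  # pass rate under treatment
    p0 = s[z == 0].mean()  # pass rate under control
    if p1 < p0:
        raise ValueError("monotonicity assumes treatment weakly increases passing")
    q = (p1 - p0) / p1     # share of treated passers who are not always-passers

    y1 = np.sort(y[(z == 1) & (s == 1)])      # treated passers' outcomes
    y0_mean = y[(z == 0) & (s == 1)].mean()   # control passers = always-passers

    k = int(np.floor(q * len(y1)))            # number of observations to trim
    upper = y1[k:].mean() - y0_mean           # trim lowest q share: upper bound
    lower = y1[:len(y1) - k].mean() - y0_mean  # trim highest q share: lower bound
    return lower, upper
```

Note how the width of the bounds grows with the trimming share q, i.e., with the gap in pass rates across arms; this illustrates the paper's point that the bounds can be very wide when many subjects' pass status depends on their assigned treatment.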
Keywords: causal inference, randomized experiments, attrition, manipulation checks, partial identification, potential outcomes
JEL Classification: C42, C9
