Does Method Matter? Assessing the Correspondence between Experimental and Nonexperimental Results from a School Voucher Program Evaluation
71 Pages. Posted: 14 Apr 2017. Last revised: 29 Oct 2020.
Date Written: October 1, 2019
Background. Randomized controlled trials (RCTs) are the gold standard for estimating the causal impacts of education programs. They are not always feasible, however, and their results may not generalize to the population of interest. Researchers generally cannot measure the selection bias in quasi-experimental estimates, leaving the true program impact uncertain.
Objective. This study assesses the performance of propensity score matching, kernel matching, and multivariate regression in replicating the experimental results of a District of Columbia school voucher evaluation.
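The first of these estimators, propensity score matching, can be sketched on simulated data. This is a minimal illustration, not the paper's actual estimation procedure: all data, parameters, and the true treatment effect of 2.0 below are hypothetical, and the propensity model is fit with plain gradient descent so no external ML library is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: one covariate X, treatment D assigned
# with probability depending on X, outcome Y with a true effect of 2.0.
n = 2000
X = rng.normal(size=(n, 1))
p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * X[:, 0])))
D = rng.binomial(1, p_true)
Y = 1.0 + 2.0 * D + 1.5 * X[:, 0] + rng.normal(size=n)

# Step 1: estimate propensity scores P(D=1 | X) with a logistic
# regression fit by gradient descent on the log-loss.
Xd = np.column_stack([np.ones(n), X])          # add an intercept column
w = np.zeros(Xd.shape[1])
for _ in range(5000):
    p = 1 / (1 + np.exp(-Xd @ w))
    w -= 0.1 * Xd.T @ (p - D) / n              # gradient step
pscore = 1 / (1 + np.exp(-Xd @ w))

# Step 2: one-to-one nearest-neighbor matching on the estimated score,
# with replacement: each treated unit gets the closest control unit.
treated = np.where(D == 1)[0]
control = np.where(D == 0)[0]
dist = np.abs(pscore[treated, None] - pscore[None, control])
matches = control[np.argmin(dist, axis=1)]

# Step 3: the ATT estimate is the mean outcome gap across matched pairs.
att = np.mean(Y[treated] - Y[matches])
print(f"Estimated ATT: {att:.2f}")             # should land near the true 2.0
```

Kernel matching differs only in Step 2, replacing the single nearest neighbor with a kernel-weighted average over all control units within a bandwidth of the treated unit's score.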
Research Design. We assess whether nonexperimental methods can replicate experimental results in two samples: the experimental sample, in which treated and nontreated students are similar in their eligibility status and desire to apply for the program, and a broader nonexperimental sample that adds geographically similar comparison students who did not apply for the program.
Results. Nonexperimental methods replicate experimental estimates more closely when the sample is limited to program applicants. There is little evidence that any one nonexperimental method outperforms the others. The bias in the quasi-experimental estimates tends to be positive when the sample is limited to program applicants but negative when it is expanded to non-applicants. This pattern suggests that voucher program applicants are negatively selected on unmeasured characteristics, whereas voucher users are positively selected on unmeasured factors.
Keywords: school vouchers, school choice, within-study comparison, randomized controlled trial, quasi-experimental design, internal validity, external validity, selection bias