Selection in Surveys: Using Randomized Incentives to Detect and Account for Nonresponse Bias

82 pages. Posted: 7 Dec 2021. Last revised: 29 Mar 2023.

Deniz Dutz

University of Chicago

Ingrid Huitfeldt

Statistics Norway

Santiago Lacouture

University of Chicago

Magne Mogstad

University of Chicago

Alexander Torgovitsky

University of Chicago

Winnie van Dijk

Harvard University

There are 2 versions of this paper.

Date Written: December 2021

Abstract

We show how to use randomized participation incentives to test and account for nonresponse bias in surveys. We first use data from a survey about labor market conditions, linked to full-population administrative data, to provide evidence of large differences in labor market outcomes between participants and nonparticipants, differences which would not be observable to an analyst who only has access to the survey data. These differences persist even after correcting for observable characteristics, raising concerns about nonresponse bias in survey responses. We then use the randomized incentives in our survey to directly test for nonresponse bias, and find strong evidence that the bias is substantial. Next, we apply a range of existing methods that account for nonresponse bias and find they produce bounds (or point estimates) that are either wide or far from the ground truth. We investigate the failure of these methods by taking a closer look at the determinants of participation, finding that the composition of participants changes in opposite directions in response to incentives and reminder emails. We develop a model of participation that allows for two dimensions of unobserved heterogeneity in the participation decision. Applying the model to our data produces bounds (or point estimates) that are narrower and closer to the ground truth than the other methods. Our results highlight the benefits of including randomized participation incentives in surveys. Both the testing procedure and the methods for bias adjustment may be attractive tools for researchers who are able to embed randomized incentives into their survey.
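The core testing idea described in the abstract can be sketched with simulated data. The following is a minimal illustration, not the paper's actual procedure; all parameter values and the selection mechanism are invented. Because the incentive is randomly assigned, respondents in the two incentive arms would be comparable random samples of the population if participation were unrelated to outcomes, so a significant gap in respondent outcome means across arms signals nonresponse bias:

```python
import numpy as np

# Minimal simulation of the testing idea: randomize a participation
# incentive, then compare respondent outcome means across incentive arms.
rng = np.random.default_rng(0)
n = 200_000

# Latent outcome (e.g., a labor market outcome); observed only for respondents.
y = rng.normal(0.0, 1.0, n)

# Randomized incentive: half the sample is offered the high incentive.
high = rng.random(n) < 0.5

# Participation depends on the incentive AND on the outcome itself;
# the dependence on y is what creates nonresponse bias.
p = np.clip(0.3 + 0.2 * high + 0.1 * y, 0.0, 1.0)
respond = rng.random(n) < p

# Under no selection on y, respondent means in the two arms coincide
# in expectation; a large |z| is evidence of nonresponse bias.
y_hi, y_lo = y[respond & high], y[respond & ~high]
gap = y_hi.mean() - y_lo.mean()
se = np.sqrt(y_hi.var() / y_hi.size + y_lo.var() / y_lo.size)
z = gap / se
print(f"gap = {gap:.3f}, z = {z:.1f}")
```

In this simulation the high incentive draws in marginal respondents who are less selected on the outcome, so the respondent mean shifts across arms even though the population is identical in both, which is exactly the signature the test looks for.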

Suggested Citation

Dutz, Deniz and Huitfeldt, Ingrid and Lacouture, Santiago and Mogstad, Magne and Torgovitsky, Alexander and van Dijk, Winnie, Selection in Surveys: Using Randomized Incentives to Detect and Account for Nonresponse Bias (December 2021). NBER Working Paper No. w29549, Available at SSRN: https://ssrn.com/abstract=3978407

Deniz Dutz (Contact Author)

University of Chicago

1101 East 58th Street
Chicago, IL 60637
United States

Ingrid Huitfeldt

Statistics Norway

N-0033 Oslo
Norway

Santiago Lacouture

University of Chicago

1101 East 58th Street
Chicago, IL 60637
United States

Magne Mogstad

University of Chicago

1101 East 58th Street
Chicago, IL 60637
United States

Alexander Torgovitsky

University of Chicago

1101 East 58th Street
Chicago, IL 60637
United States

Winnie van Dijk

Harvard University

Littauer Center of Public Administration
Cambridge, MA 02138
United States
