Misclassification in Difference-in-Differences Models

33 Pages Posted: 8 Aug 2022

Augustine Denteh

Tulane University - Department of Economics

Désiré Kédagni

Iowa State University - Department of Economics

Date Written: August 4, 2022


The difference-in-differences (DID) design is one of the most popular methods in empirical economics research. However, there is almost no work examining what the DID method identifies in the presence of a misclassified treatment variable. This paper studies the identification of treatment effects in DID designs when the treatment is misclassified. Misclassification arises in various ways, including when the timing of a policy intervention is ambiguous or when researchers need to infer treatment from auxiliary data. We show that the DID estimand is biased and recovers a weighted average of the average treatment effects on the treated (ATT) in two subpopulations: the correctly classified and the misclassified groups. In some cases, the DID estimand may have the wrong sign; otherwise, it is attenuated toward zero. We provide bounds on the ATT when the researcher has access to information on the extent of misclassification in the data. We demonstrate our theoretical results using simulations and provide two empirical applications to guide researchers in performing sensitivity analysis with our proposed methods.
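The attenuation described in the abstract can be illustrated with a toy two-period simulation (a minimal sketch, not the paper's actual setup: the sample size, treatment share, trend, ATT, and the one-sided 25% misclassification rate below are all illustrative assumptions). Some truly treated units are recorded as untreated, which contaminates the observed control group with treated outcome changes and pulls the DID estimate toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_att, trend = 2.0, 1.0   # illustrative values (assumption)
m = 0.25                     # illustrative misclassification rate (assumption)

d_true = rng.integers(0, 2, n)  # true treatment status, ~50% treated
# one-sided misclassification: a fraction m of treated units is recorded as untreated
d_obs = np.where((d_true == 1) & (rng.random(n) < m), 0, d_true)

alpha = rng.normal(0.0, 1.0, n)                    # unit fixed effects
y_pre = alpha + rng.normal(0.0, 1.0, n)            # pre-period outcome
y_post = alpha + trend + true_att * d_true + rng.normal(0.0, 1.0, n)
dy = y_post - y_pre                                # first difference removes alpha

# DID estimand: mean outcome change, observed treated minus observed control
did = dy[d_obs == 1].mean() - dy[d_obs == 0].mean()
did_true = dy[d_true == 1].mean() - dy[d_true == 0].mean()
print(f"DID with true classification: {did_true:.3f}")
print(f"DID with misclassification:   {did:.3f}")  # attenuated toward zero
```

In this one-sided example the observed control group is a mixture of true controls and misclassified treated units, so the misclassified DID recovers roughly true_att times one minus the treated share of the observed control group, i.e. about 1.6 here rather than 2.0. With two-sided misclassification the sign can even flip, consistent with the abstract's warning.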

Keywords: Difference-in-differences, average treatment effect on the treated, misclassification

JEL Classification: C14, C31, C35, C36

Suggested Citation

Denteh, Augustine and Kédagni, Désiré, Misclassification in Difference-in-Differences Models (August 4, 2022). Available at SSRN: https://ssrn.com/abstract=4181736 or http://dx.doi.org/10.2139/ssrn.4181736

Augustine Denteh

Tulane University - Department of Economics

New Orleans, LA 70118
United States

Désiré Kédagni (Contact Author)

Iowa State University - Department of Economics

260 Heady Hall
Ames, IA 50011
United States
