Model Uncertainty and Robustness Tests: Towards a New Logic of Statistical Inference
Vienna University of Economics and Business - Department of Socioeconomics; University of Essex - Department of Government
London School of Economics and Political Science (LSE)
August 14, 2012
We develop the increasingly common but typically ad hoc practice of robustness testing into a research methodology that allows reliable inferences when researchers do not know the true data-generating process. We identify three principal sources of model uncertainty. First, theories simplify and aim at isolating causal mechanisms under ceteris paribus conditions; they therefore cannot fully specify the data-generating process. Second, empirical researchers are uncertain about what constitutes the population – the set of cases for which a theory claims validity – from which to draw a representative sample as required for econometric estimation. Third, many concepts used in social science theories cannot be observed directly and can only be measured by proxy variables, which deviate in unknown but potentially systematic ways from the theoretical concepts. These three causes of model uncertainty give rise to at least 16 specific aspects of model specification that can be subjected to robustness tests. Defining robustness as the stability of causal inferences, we argue against the misplaced focus on single point estimates from any specific test. Instead, we suggest that scholars employ model averaging techniques to assess robustness in terms of the entire set of empirical evidence from the baseline model and all robustness test models. This approach avoids the type-II error bias of current practice and encourages researchers to undertake a larger set of robustness tests instead of hand-selecting only those tests whose results do not deviate from the baseline model. We illustrate our proposal with an existing study by the authors on the political determinants of famine mortality, which contained only two robustness tests and thus raises the question of whether its causal inferences are fragile.
Subjecting our baseline model to a battery of additional tests, we conclude that our causal inferences are in fact robust: democracies respond more elastically than autocracies to the simultaneous presence of international food aid and a large share of the population being affected, which results in lower famine mortality.
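The model-averaging logic described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact procedure: the coefficient values, the equal model weights, and the normality assumption for the sampling distributions are all illustrative assumptions. The idea is simply that robustness is judged against the whole set of evidence from the baseline and all robustness test models, rather than against any single point estimate.

```python
import math

# Hypothetical (coefficient, standard error) pairs from a baseline model
# and several robustness test models; all values are illustrative.
models = {
    "baseline":       (0.50, 0.10),
    "alt_sample":     (0.42, 0.12),
    "alt_proxy":      (0.55, 0.15),
    "extra_controls": (0.38, 0.11),
}

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Equal-weight model average of the point estimates (one simple choice;
# weights could instead reflect model fit, e.g. information-criterion weights).
weights = {name: 1.0 / len(models) for name in models}
avg_coef = sum(w * models[name][0] for name, w in weights.items())

# For each model, the share of its (assumed normal) sampling distribution
# lying above zero, i.e. agreeing with a positive effect. Averaging these
# shares gives a rough stability measure over the entire set of evidence.
shares = [1.0 - normal_cdf(0.0, b, se) for b, se in models.values()]
stability = sum(w * s for w, s in zip(weights.values(), shares))

print(f"model-averaged coefficient: {avg_coef:.3f}")
print(f"average share of evidence supporting a positive effect: {stability:.3f}")
```

A stability value close to 1 would indicate that essentially all of the averaged evidence supports the same directional inference; values near 0.5 would flag fragility that a single hand-picked robustness test could mask.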
Number of Pages in PDF File: 37
Keywords: model uncertainty, misspecification, robustness, inferences, model averaging
Date posted: August 14, 2012