Theory and Evidence in International Conflict: A Response to De Marchi, Gelpi, and Grynaviski
American Political Science Review, Vol. 98, No. 2, pp. 379-389, May 2004
We thank Scott de Marchi, Christopher Gelpi, and Jeffrey Grynaviski (2003; hereinafter dGG) for their careful attention to our work (Beck, King, and Zeng 2000; hereinafter BKZ) and for raising some important methodological issues that we agree deserve readers' attention. We are pleased that dGG's analyses are consistent with the theoretical conjecture about international conflict put forward in BKZ: "The causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but they are large, stable, and replicable whenever the ex ante probability of conflict is large" (BKZ, p. 21). We are also pleased that dGG agree with our main methodological point: out-of-sample forecasting performance should always be one of the standards used to judge studies of international conflict, and indeed most other areas of political science.
However, dGG frequently err when they draw methodological conclusions. Their central claim is that logit is superior to neural network models for international conflict data, as judged by forecasting performance and other properties such as ease of use and interpretation ("neural networks hold few unambiguous advantages . . . and carry significant costs relative to logit"; dGG, p. 14). We show here that this claim, which would be regarded as stunning in any of the diverse fields in which both methods are more commonly used, is false. We also show that dGG's methodological errors, and the restrictive model they favor, cause them to miss and mischaracterize crucial patterns in the causes of international conflict.
We begin in the next section by summarizing the growing support for our conjecture about international conflict. The second section discusses the theoretical reasons why neural networks dominate logistic regression, correcting a number of methodological errors. The third section then demonstrates empirically, using the same data as BKZ and dGG, that neural networks substantially outperform dGG's logit model: neural networks improve on the forecasts from logit as much as logit improves on a model with no theoretical variables. We also show how dGG's logit analysis assumed, rather than estimated, the answer to the central question about the literature's most important finding, the effect of democracy on war. Since this and other substantive assumptions underlying their logit model are wrong, their substantive conclusion about the democratic peace is also wrong. The neural network models we used in BKZ not only avoid these difficulties; they, or any of the other available methods that do not impose highly restrictive assumptions about the exact functional form, are just what is called for to study the observable implications of our conjecture.
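The comparison described above can be illustrated with a small sketch. The code below is not the authors' actual specification or data; it generates hypothetical dyad-like data whose outcome probability depends on an interaction the plain logit omits, then compares out-of-sample log loss for an intercept-only ("no theoretical variables") model, a logistic regression, and a one-hidden-layer neural network, all fit by full-batch gradient descent. All variable names and settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(y, p, eps=1e-12):
    # Average negative log likelihood of Bernoulli outcomes
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical dyad-year data: conflict probability depends on an
# interaction (x1 * x2) that a linear-in-covariates logit cannot capture
n = 4000
X = rng.normal(size=(n, 2))
true_logit = -2.0 + 1.0 * X[:, 0] + 1.5 * X[:, 0] * X[:, 1]
y = rng.binomial(1, sigmoid(true_logit)).astype(float)
split = n // 2
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

def fit_logit(X, y, lr=0.1, iters=2000):
    # Gradient descent on the logistic log likelihood
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_logit(w, X):
    return sigmoid(np.column_stack([np.ones(len(X)), X]) @ w)

def fit_nn(X, y, hidden=8, lr=0.1, iters=4000):
    # One tanh hidden layer, sigmoid output, trained by backpropagation
    r = np.random.default_rng(1)
    W1 = r.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = r.normal(scale=0.5, size=hidden)
    b2 = 0.0
    for _ in range(iters):
        H = np.tanh(X @ W1 + b1)
        p = sigmoid(H @ W2 + b2)
        g = (p - y) / len(y)               # d(loss)/d(output logit)
        gH = np.outer(g, W2) * (1 - H**2)  # backprop through tanh
        W2 -= lr * H.T @ g
        b2 -= lr * g.sum()
        W1 -= lr * X.T @ gH
        b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

def predict_nn(params, X):
    W1, b1, W2, b2 = params
    return sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)

w = fit_logit(Xtr, ytr)
nn = fit_nn(Xtr, ytr)
loss_null = log_loss(yte, np.full(len(yte), ytr.mean()))
loss_logit = log_loss(yte, predict_logit(w, Xte))
loss_nn = log_loss(yte, predict_nn(nn, Xte))
print(f"null: {loss_null:.3f}  logit: {loss_logit:.3f}  nn: {loss_nn:.3f}")
```

Because the data-generating process here contains a nonlinearity the logit omits, the flexible model can recover structure the restrictive one assumes away, which is the methodological point at issue; with other seeds or settings the exact losses will differ.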