Nashbots: How Political Scientists Have Underestimated Human Rationality, and How to Fix It
18 Pages · Posted: 28 Nov 2016 · Last revised: 8 Feb 2017
Date Written: November 23, 2016
Political scientists use experiments to test the predictions of game-theoretic models. In a typical experiment, each subject makes choices that determine her own earnings and the earnings of other subjects, with payments corresponding to the utility payoffs of a theoretical game. But social preferences distort the correspondence between a subject’s cash earnings and her subjective utility. Because social preferences vary, anonymously matched subjects cannot know their opponents’ preferences over outcomes, which turns many laboratory tasks into games of incomplete information. We reduce the distortion of social preferences by pitting subjects against algorithmic agents (“Nashbots”). Across 11 experimental tasks, subjects facing human opponents played rationally only 36% of the time, whereas those facing algorithmic agents did so 60% of the time. We conclude that experimentalists have underestimated the economic rationality of laboratory subjects by designing tasks that are poor analogies to the games they purport to test.
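The design idea can be illustrated with a minimal sketch. The game, payoffs, and function names below are purely hypothetical (the abstract does not specify which tasks were used); the point is only that an algorithmic opponent committed to an equilibrium strategy has no social preferences for the subject to reason about, so rational play is well defined:

```python
import random

# Hypothetical illustration: a "Nashbot" for matching pennies commits to
# the mixed-strategy Nash equilibrium (Heads with probability 1/2), so a
# human opponent's beliefs about its social preferences are irrelevant.
# Payoffs and names are assumptions for this sketch, not from the paper.

PAYOFFS = {  # (human move, bot move) -> human (matcher's) payoff
    ("H", "H"): 1, ("T", "T"): 1,
    ("H", "T"): -1, ("T", "H"): -1,
}

def nashbot_move(rng: random.Random) -> str:
    """Play the equilibrium mixture: H or T with equal probability."""
    return rng.choice(["H", "T"])

def play_round(human_move: str, rng: random.Random) -> int:
    """Return the human's payoff for one round against the Nashbot."""
    return PAYOFFS[(human_move, nashbot_move(rng))]

# Against the equilibrium mixture, every human strategy earns the same
# expected payoff (zero here), independent of the bot's "preferences".
rng = random.Random(0)
avg = sum(play_round("H", rng) for _ in range(10_000)) / 10_000
```

Because the bot's strategy is fixed and preference-free, deviations from rational play by the human subject can be attributed to the subject rather than to uncertainty about the opponent.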