Learning in Repeated Public Goods Games - A Meta Analysis

40 Pages · Posted: 12 Sep 2018

Date Written: December 01, 2015


I examine the generalizability of a broad range of prominent learning models in explaining contribution patterns in repeated linear public goods games. Experimental data from twelve previously published papers are used to test several learning models in terms of how accurately they describe individuals' round-by-round choices. The experimental data are split into 18 datasets, each of which differs from the others in at least one of the following aspects: the marginal per capita return, group size, matching protocol, number of rounds, and the endowment that determines the number of stage-game strategies. Both the ex-post descriptive fit of the learning models and their ex-ante predictive accuracy are examined. The following learning models are included in the study: reinforcement learning, normalized reinforcement learning, the reinforcement average model with loss aversion (REL), stochastic fictitious play, normalized stochastic fictitious play, experience-weighted attraction learning (EWA), self-tuning EWA, and impulse matching learning. REL outperforms all other learning models in both within-dataset descriptive fit and out-of-sample predictive accuracy across datasets. While all the learning models outperform the random choice benchmark, only REL performs at least as well as the model that reflects dataset-level overall empirical frequencies. The results suggest that learning in repeated linear public goods games is more in line with reinforcement learning than with belief learning or regret-based learning. Finally, REL also outperforms individual evolutionary learning (IEL) in predicting the full distribution of contributions. Average reinforcement learning that is sensitive to observed payoff variability and insensitive to payoff magnitude underlies the success of REL in explaining contributions in repeated public goods games over a broad spectrum of game parameters.
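To make the averaging-with-variability-scaling idea concrete, the following is a minimal illustrative sketch of an average reinforcement learner for a linear public goods stage game. It is not the paper's exact REL specification: the class name, the parameter names (`phi`, `lam`), and the standard-deviation normalization are assumptions chosen to illustrate how payoff-variability sensitivity and magnitude insensitivity can be implemented.

```python
import math
import random

class AvgReinforcementLearner:
    """Illustrative average reinforcement learner over contribution
    levels 0..endowment (the stage-game strategies). NOT the paper's
    exact REL model; parameterization is a simplifying assumption."""

    def __init__(self, endowment, phi=0.9, lam=1.0):
        self.strategies = list(range(endowment + 1))
        self.attractions = {s: 0.0 for s in self.strategies}
        self.phi = phi      # recency weight on past attractions
        self.lam = lam      # softmax sharpness
        self.payoffs = []   # observed payoffs, for variability scaling

    def choose(self):
        # Scale attractions by the standard deviation of observed
        # payoffs, so choice probabilities respond to payoff
        # variability but not to the absolute payoff magnitude.
        if len(self.payoffs) > 1:
            mean = sum(self.payoffs) / len(self.payoffs)
            var = sum((p - mean) ** 2 for p in self.payoffs) / len(self.payoffs)
            scale = math.sqrt(var) or 1.0
        else:
            scale = 1.0
        weights = [math.exp(self.lam * self.attractions[s] / scale)
                   for s in self.strategies]
        r = random.random() * sum(weights)
        for s, w in zip(self.strategies, weights):
            r -= w
            if r <= 0:
                return s
        return self.strategies[-1]

    def update(self, chosen, payoff):
        # Exponentially weighted average: decay all attractions,
        # then reinforce only the chosen contribution level.
        self.payoffs.append(payoff)
        for s in self.strategies:
            self.attractions[s] *= self.phi
        self.attractions[chosen] += (1 - self.phi) * payoff

# Usage: four learners in a linear public goods game with an
# assumed marginal per capita return of 0.4 and endowment 20.
learners = [AvgReinforcementLearner(endowment=20) for _ in range(4)]
for t in range(50):
    contribs = [a.choose() for a in learners]
    share = 0.4 * sum(contribs)  # each player's return from the pot
    for a, c in zip(learners, contribs):
        a.update(c, 20 - c + share)
```

Because attractions are averaged rather than accumulated, they stay on the scale of stage-game payoffs, and dividing by the observed payoff standard deviation keeps choice probabilities invariant to rescaling all payoffs by a constant.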

Keywords: public goods games, learning, reinforcement learning, belief learning

JEL Classification: C63, C92, D83, H41

Suggested Citation

Cotla, Chenna Reddy, Learning in Repeated Public Goods Games - A Meta Analysis (December 01, 2015). Available at SSRN: https://ssrn.com/abstract=3241779 or http://dx.doi.org/10.2139/ssrn.3241779

Chenna Reddy Cotla (Contact Author)

American Institutes for Research

1990 K Street, NW
Washington, DC 20006-1107
United States

HOME PAGE: https://www.air.org/

