Transfer of Conflict and Cooperation from Experienced Games to New Games: A Connectionist Model of Learning

Frontiers in Neuroscience 9. doi:10.3389/fnins.2015.00102.

Posted: 19 Jun 2015

Leonidas Spiliopoulos

Max Planck Society for the Advancement of the Sciences - Max Planck Institute for Human Development

Date Written: March 31, 2015

Abstract

The question of whether, and if so how, learning can be transferred from previously experienced games to novel games has recently attracted the attention of the experimental game theory literature. Existing research presumes that learning operates over actions, beliefs, or decision rules. This study instead uses a connectionist approach that learns a direct mapping from game payoffs to a probability distribution over own actions. Learning is operationalized as a backpropagation rule that adjusts the weights of feedforward neural networks in the direction of increasing the probability of an agent playing a myopic best response to the last game played. One advantage of this approach is that it expands the scope of the model to any possible n × n normal-form game, allowing for a comprehensive model of transfer of learning. Agents are exposed to games drawn from one of seven classes of games with significantly different strategic characteristics and then forced to play games from previously unseen classes. I find significant transfer of learning, i.e., behavior that is path-dependent, or conditional on the previously seen games. Cooperation is more pronounced in new games when agents are previously exposed to games where the incentive to cooperate is stronger than the incentive to compete, i.e., when individual incentives are aligned. Prior exposure to Prisoner's dilemma, zero-sum, and discoordination games led to a significant decrease in realized payoffs for all the game classes under investigation. A distinction is made between superficial and deep transfer of learning: the former is driven by superficial payoff similarities between games, the latter by differences in the incentive structures or strategic implications of the games. I examine whether agents learn to play the Nash equilibria of games, how they select amongst multiple equilibria, and whether they transfer Nash equilibrium behavior to unseen games.
Sufficient exposure to a strategically heterogeneous set of games is found to be a necessary condition for deep learning (and transfer) across game classes. Paradoxically, superficial transfer of learning is shown to lead to better outcomes than deep transfer for a wide range of game classes. The simulation results corroborate important experimental findings with human subjects, and make several novel predictions that can be tested experimentally.
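The learning rule described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's actual implementation: the network sizes, learning rate, and the choice of defining the myopic best response against the opponent's last action are all assumptions made here for illustration. The sketch maps the flattened payoff matrices of a 2 × 2 game to a softmax distribution over the agent's two actions, and one backpropagation step follows the gradient that increases the probability of the best response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a 2x2 game: 8 payoff entries in, 2 actions out.
N_IN, N_HID, N_ACT = 8, 16, 2
W1 = rng.normal(0.0, 0.1, (N_HID, N_IN)); b1 = np.zeros(N_HID)
W2 = rng.normal(0.0, 0.1, (N_ACT, N_HID)); b2 = np.zeros(N_ACT)

def forward(x):
    """Map flattened payoffs to (hidden activations, action probabilities)."""
    h = np.tanh(W1 @ x + b1)
    z = W2 @ h + b2
    e = np.exp(z - z.max())           # numerically stable softmax
    return h, e / e.sum()

def train_step(own, opp, opp_action, lr=0.5):
    """One backprop step that raises the probability of the myopic best
    response to the opponent's last action (log-likelihood gradient)."""
    global W1, b1, W2, b2
    x = np.concatenate([own.ravel(), opp.ravel()])
    target = int(np.argmax(own[:, opp_action]))   # myopic best response
    h, p = forward(x)
    dz = p.copy(); dz[target] -= 1.0              # d(-log p[target]) / dz
    dW2 = np.outer(dz, h); db2 = dz
    da = (W2.T @ dz) * (1.0 - h**2)               # backprop through tanh
    dW1 = np.outer(da, x); db1 = da
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Prisoner's dilemma (row player's payoffs); action 1 (defect) dominates.
own = np.array([[3.0, 0.0], [5.0, 1.0]])
opp = own.T
x = np.concatenate([own.ravel(), opp.ravel()])
p_defect_before = forward(x)[1][1]
for _ in range(200):
    train_step(own, opp, opp_action=1)
p_defect_after = forward(x)[1][1]
print(p_defect_before, p_defect_after)
```

Repeated exposure to such games drives the network's weights, and hence its play in structurally similar unseen games, toward the reinforced response, which is the mechanism behind the path dependence reported above.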

Keywords: transfer of learning, game theory, cooperation and conflict, connectionist modeling, neural networks and behavior, agent-based modeling

Suggested Citation

Spiliopoulos, Leonidas, Transfer of Conflict and Cooperation from Experienced Games to New Games: A Connectionist Model of Learning (March 31, 2015). Frontiers in Neuroscience 9, doi:10.3389/fnins.2015.00102. Available at SSRN: https://ssrn.com/abstract=2587823

Leonidas Spiliopoulos (Contact Author)

Max Planck Society for the Advancement of the Sciences - Max Planck Institute for Human Development

Lentzeallee 94
D-14195 Berlin
Germany
