Endogenous Networks in Random Population Games
Sant'Anna School of Advanced Studies, LEM Working Paper No. 2003/03
38 Pages Posted: 5 Aug 2003
Date Written: January 30, 2003
In recent years, many contributions have explored population learning in economies where myopic agents play bilateral games and repeatedly choose their pure strategies and, possibly, their opponents. These models study bilateral stage-games reflecting very simple strategic situations (e.g. coordination). Moreover, they assume that payoffs are common knowledge and that all agents play the same game against one another. Population learning therefore acts on smooth landscapes, where individual payoffs are relatively stable across strategy configurations. In this paper, we present a preliminary investigation of dynamic population games with endogenous networks over 'rugged' landscapes, where agents face strong uncertainty about the expected payoffs from bilateral interactions. We propose a simple model in which individual payoffs from playing a binary action against everyone else (conditional on any possible combination of actions performed by the others) are distributed as i.i.d. U[0,1] random variables. We call this setting a 'random population game' and study population adaptation over time when agents can update both actions and partners using deterministic, myopic, best-reply rules. We assume that, to evaluate networks in which an agent is not linked with everyone else, agents use simple rules (i.e. statistics such as MIN, MAX, and MEAN) computed on the distribution of payoffs associated with all possible combinations of actions performed by agents outside the interaction set. We investigate the long-run properties of the system by means of computer simulations. We show that both the long-run (LR) behavior of the system (e.g. convergence to steady states) and its short-run dynamic properties are strongly affected by: (i) the payoff rule employed; and (ii) whether players are change-averse or not.
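The payoff structure described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: the population size, the dict-based payoff table, and the function names are all hypothetical. Each agent draws an i.i.d. U[0,1] payoff for every full action profile, and a partial network is evaluated by summarizing the payoffs over all action combinations of agents outside the interaction set with the MIN, MAX, or MEAN rule.

```python
import itertools
import random

N = 4  # small population, so the 2^N payoff table per agent stays tractable
random.seed(0)

# Random population game: for each agent i, an i.i.d. U[0,1] payoff is drawn
# for every full action profile (i's own binary action plus everyone else's).
payoff = {
    i: {p: random.random() for p in itertools.product((0, 1), repeat=N)}
    for i in range(N)
}

def evaluate(i, profile, neighbors, rule):
    """Payoff agent i assigns to a profile when linked only to `neighbors`:
    actions of agents in the interaction set (and i's own) are taken as
    given; actions of outsiders range over all combinations, and the
    resulting payoff distribution is summarized by MIN, MAX, or MEAN."""
    outsiders = [j for j in range(N) if j != i and j not in neighbors]
    values = []
    for combo in itertools.product((0, 1), repeat=len(outsiders)):
        full = list(profile)
        for j, a in zip(outsiders, combo):
            full[j] = a
        values.append(payoff[i][tuple(full)])
    if rule == "MIN":
        return min(values)
    if rule == "MAX":
        return max(values)
    return sum(values) / len(values)  # MEAN
```

When the agent is linked to everyone, no action is unobserved and all three rules collapse to the realized payoff of the profile; otherwise MIN and MAX bracket MEAN, which is what makes the choice of rule matter for the dynamics.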
We find that if agents use the MEAN rule then, irrespective of the change-aversion regime, the system displays a multiplicity of steady states. Populations always climb local optima, first using AU and NU together and then NU only. Climbing occurs through successful adaptation and generates an LR positive correlation between the number of links and average payoffs. With the MIN or MAX rules, the LR behavior of the system depends instead on whether players are change-averse. If they are and employ the MIN rule, the network converges to a steady state where all agents are (almost) fully connected, but strategies do not converge, so that average payoffs oscillate. If agents employ the MAX rule, the system displays many steady states (in both networks and actions) characterized by few links and different levels of average payoff. Finally, if agents are change-lovers, the population can explore a larger portion of the landscape. Therefore, with agents using the MIN rule, the network quickly approaches the complete one, but from then on exploration over strategies and networks goes on forever. If they employ the MAX rule, the system reaches a unique payoff optimum: all populations converge to the same payoff distribution, but neutral NU continues forever (without affecting realized payoffs).
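The abstract does not define the change-aversion regime formally; a natural reading is a tie-breaking convention in the deterministic best-reply rule, sketched below. The function name and the exact convention are assumptions made for illustration, not taken from the paper.

```python
def best_reply(current, alternative, value, change_averse):
    """One myopic best-reply step over a binary choice (an action, or
    keeping vs. revising a link). `value` maps a candidate choice to the
    agent's estimated payoff. Assumed convention: a change-averse agent
    keeps its current choice unless the alternative is strictly better,
    while a change-lover also switches on payoff ties."""
    if change_averse:
        return alternative if value(alternative) > value(current) else current
    return alternative if value(alternative) >= value(current) else current
```

Under this reading, switching on ties is what lets change-loving populations keep drifting across payoff-equivalent configurations (the "neutral NU" above), while change-averse populations lock in at the first configuration no strict improvement can leave.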
Keywords: Dynamic Population Games, Bounded Rationality, Endogenous Networks, Fitness Landscapes, Evolutionary Environments, Adaptive Expectations
JEL Classification: C72, C73, D80