Naive Learning Through Probability Over-Matching
27 Pages · Posted: 11 Mar 2019 · Last revised: 13 Aug 2020
Date Written: February 19, 2019
Abstract
We analyze boundedly rational updating in a repeated-interaction network model with binary actions and binary states. Agents form beliefs according to discretized DeGroot updating and apply a decision rule that assigns a (mixed) action to each belief. We first show that, under weak assumptions, random decision rules are sufficient to achieve agreement in finite time in any strongly connected network. Our main result establishes that naive learning can be achieved in any large strongly connected network. That is, if beliefs exhibit a high level of inertia, then there exist corresponding decision rules coinciding with probability over-matching such that the eventual agreement action matches the true state with probability converging to one as the network size goes to infinity.
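The dynamics described in the abstract can be illustrated with a toy simulation. The sketch below is an illustrative assumption, not the paper's exact construction: the discretization, the inertia parameter `self_weight`, and the specific over-matching transform (a power-law tilt of the belief toward the extremes; exponent 1 would be plain probability matching) are all hypothetical choices made for this example.

```python
import random

def simulate(adjacency, signals, rounds=50, self_weight=0.9, overmatch=1.5, seed=0):
    """Toy DeGroot-style dynamics with a probability-over-matching decision rule.

    adjacency   : dict node -> list of neighbors (strongly connected assumed)
    signals     : dict node -> initial belief in [0, 1] that the true state is 1
    self_weight : inertia -- the weight an agent keeps on its own belief
    overmatch   : exponent > 1 pushing action probabilities past the belief
                  toward 0 or 1 (over-matching); exponent 1 = probability matching
    """
    rng = random.Random(seed)
    beliefs = dict(signals)
    actions = {}
    for _ in range(rounds):
        # Decision rule: play action 1 with an over-matched probability.
        for i, b in beliefs.items():
            p = b**overmatch / (b**overmatch + (1 - b)**overmatch)
            actions[i] = 1 if rng.random() < p else 0
        # Discretized DeGroot step: each agent mixes its own belief with the
        # average observed neighbor action; high self_weight = high inertia.
        beliefs = {
            i: self_weight * beliefs[i]
               + (1 - self_weight) * sum(actions[j] for j in nbrs) / len(nbrs)
            for i, nbrs in adjacency.items()
        }
    return beliefs, actions
```

For example, on a small complete network where two of three agents receive a correct signal, running `simulate` returns beliefs that remain valid probabilities while agents exchange binary actions, mirroring the binary-action, binary-state setup of the model.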