A Model of Adaptive Reinforcement Learning
17 Pages Posted: 3 Apr 2019
Date Written: March 11, 2019
We develop a model of learning that extends classic models of reinforcement learning to a continuous, multidimensional strategy space. The model takes advantage of recent approximation methods to tackle the curse of dimensionality inherent in a traditional discretization approach. Crucially, the model endogenously partitions strategies into sets of similar strategies and allows agents to learn over these sets, which speeds up the learning process. We provide an application of our model to predict which memory-1 mixed strategies will be played in the indefinitely repeated Prisoner's Dilemma game. We show that, despite allowing mixed strategies, strategies close to the pure strategies always defect, grim trigger, and tit-for-tat emerge -- a result that qualitatively matches recent strategy choice experiments with human subjects.
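To make the strategy space concrete: a memory-1 strategy for the repeated Prisoner's Dilemma can be represented as a point in [0, 1]^4, giving the probability of cooperating after each of the four possible outcomes of the previous round. The sketch below illustrates this standard representation in Python; the names and helper function are illustrative and not drawn from the paper itself.

```python
import random

# A memory-1 strategy is a tuple (p_CC, p_CD, p_DC, p_DD): the probability
# of cooperating after each outcome of the previous round, written as
# (my last move, opponent's last move). Pure strategies are corner points
# of this continuous cube.
ALWAYS_DEFECT = (0.0, 0.0, 0.0, 0.0)
TIT_FOR_TAT   = (1.0, 0.0, 1.0, 0.0)   # cooperate iff opponent cooperated
GRIM_TRIGGER  = (1.0, 0.0, 0.0, 0.0)   # cooperate only after mutual cooperation

OUTCOMES = ["CC", "CD", "DC", "DD"]

def next_move(strategy, last_outcome, rng=random):
    """Sample C or D given last round's outcome (my move, opponent's move)."""
    p_cooperate = strategy[OUTCOMES.index(last_outcome)]
    return "C" if rng.random() < p_cooperate else "D"

# A mixed strategy is any interior point, e.g. a "generous" tit-for-tat
# that forgives defection with probability 0.1 (a hypothetical example):
GENEROUS_TFT = (1.0, 0.1, 1.0, 0.1)
```

The paper's result can be read as: learning dynamics over this continuous cube concentrate near the three pure corner points listed above, even though interior (mixed) points are available.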
Keywords: Reinforcement Learning, Repeated-game Strategies, Repeated Prisoner's Dilemma, Mixed Strategies, Agent-based Models, Markov Strategies