Markov Games with Frequent Actions and Incomplete Information
37 Pages · Posted: 25 Oct 2013 · Last revised: 24 Jun 2015
Date Written: October 24, 2013
We study a two-player, zero-sum, stochastic game with incomplete information on one side, in which the players are allowed to play more and more frequently. The informed player observes the realization of a Markov chain on which the payoffs depend, while the uninformed player observes only his opponent's actions. We show the existence of a limit value as the time span between two consecutive stages vanishes; this value is characterized through an auxiliary optimization problem and as the solution of a Hamilton-Jacobi equation.
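To fix ideas, a Hamilton-Jacobi characterization of a limit value in this class of games typically takes a form like the following. This is an illustrative sketch only, not the paper's actual equation; all symbols here (the belief variable p, the chain's generator R, the non-revealing value u) are assumptions about the standard setup, not taken from the abstract.

```latex
% Illustrative sketch (assumed notation, not the paper's statement):
%   p    = belief of the uninformed player over the states of the Markov chain,
%   R    = generator (transition-rate matrix) of the Markov chain,
%   u(p) = value of the associated non-revealing local game at belief p.
% In this style of result, the limit value v is characterized as the
% (viscosity) solution of a first-order equation of the general type
\[
  \lambda\, v(p) \;-\; \bigl\langle \nabla v(p),\, R^{\top} p \bigr\rangle
  \;-\; u(p) \;=\; 0 ,
\]
% together with a concavity-type condition in p that encodes the informed
% player's trade-off between using and revealing his information.
```

The drift term ⟨∇v(p), Rᵀp⟩ reflects the exogenous evolution of the belief under the Markov chain, while the concavity condition is the usual fingerprint of incomplete information on one side.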