Markov Decision Processes with Exogenous Variables
28 Pages · Posted: 21 Feb 2017 · Last revised: 6 Feb 2018
Date Written: February 5, 2018
Abstract
I present two algorithms for solving dynamic programs with exogenous variables: endogenous value iteration and endogenous policy iteration. These algorithms resemble relative value iteration and relative policy iteration, except that they discard the variation in the value function that is due solely to the exogenous variables; this variation does not affect the policy function. My algorithms are always at least as fast as relative value iteration and relative policy iteration, and are strictly faster when the endogenous variables converge to their stationary distributions faster than the exogenous variables do.
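The abstract's key observation, that value-function variation driven solely by the exogenous variables does not affect the policy, can be checked on a toy problem. The sketch below is not the paper's algorithm; it is plain discounted value iteration on a hypothetical two-component state (endogenous x, exogenous z), constructed here for illustration. Adding a reward term g(z) that depends only on the exogenous variable shifts the value function by a function of z alone and leaves the greedy policy unchanged.

```python
import numpy as np

def value_iteration(P, R, beta=0.95, tol=1e-8, max_iter=10_000):
    """Plain discounted value iteration.
    P: (A, S, S) transition matrices; R: (S, A) rewards."""
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Q[s, a] = R[s, a] + beta * sum_t P[a, s, t] * V[t]
        Q = R + beta * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=1)

# Toy MDP (hypothetical, for illustration): state s = 2*x + z,
# endogenous x in {0,1} set directly by the action, exogenous z in {0,1}
# following its own Markov chain regardless of the action.
Pz = np.array([[0.9, 0.1],
               [0.2, 0.8]])          # exogenous chain for z
A, S = 2, 4
P = np.zeros((A, S, S))
for a in range(A):
    for x in range(2):
        for z in range(2):
            for z2 in range(2):
                P[a, 2 * x + z, 2 * a + z2] = Pz[z, z2]

# Reward: 1 when x = z = 1, minus a cost of 0.2 for choosing a = 1.
R = np.array([[1.0 * (x == 1 and z == 1) - 0.2 * a for a in range(A)]
              for x in range(2) for z in range(2)])

# Add an exogenous-only reward term g(z): it cannot change the policy.
g_per_state = np.array([0.4, -0.3, 0.4, -0.3])   # g depends on z only
V1, pi1 = value_iteration(P, R)
V2, pi2 = value_iteration(P, R + g_per_state[:, None])

print((pi1 == pi2).all())            # policies coincide
d = V2 - V1                          # value shift is a function of z alone
print(abs(d[0] - d[2]) < 1e-5 and abs(d[1] - d[3]) < 1e-5)
```

This is exactly the policy-invariant component that, per the abstract, relative value iteration keeps computing but the endogenous variants can discard.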
Keywords: Markov Decision Process; Dynamic Programming; Relative Value Iteration; Strong Convergence; Exogenous Variables