A Theory of Markovian Time Inconsistent Stochastic Control in Continuous Time
44 Pages · Posted: 4 Feb 2016
Date Written: February 2, 2016
Abstract
In this paper, which is a continuation of our earlier discrete-time paper, we develop a theory for continuous-time stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We study these problems within a game-theoretic framework, and we look for subgame-perfect Nash equilibrium points. Within the framework of a controlled SDE and a fairly general objective functional, we derive an extension of the standard Hamilton-Jacobi-Bellman equation, in the form of a system of non-linear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. As applications of the general theory we study non-exponential discounting as well as a time-inconsistent linear-quadratic regulator. We also present a study of time inconsistency within the framework of a general equilibrium production economy of Cox-Ingersoll-Ross type.
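To illustrate the structure of the extended Hamilton-Jacobi-Bellman system mentioned in the abstract, the following is a schematic sketch for the simplified objective $J(t,x,\mathbf{u}) = E_{t,x}\!\left[F(t, X_T^{\mathbf{u}})\right]$, where the reward depends explicitly on the initial time $t$. The notation here is assumed for illustration (not drawn from the paper body): $\mathcal{A}^{u}$ denotes the controlled infinitesimal generator, $\hat{\mathbf{u}}$ the equilibrium control, and $f^{s}(t,x) = f(t,x,s)$ an auxiliary function with the time argument of the reward frozen at $s$.

\[
\sup_{u}\Big\{ (\mathcal{A}^{u} V)(t,x) \;-\; (\mathcal{A}^{u} f)(t,x,t) \;+\; (\mathcal{A}^{u} f^{t})(t,x) \Big\} \;=\; 0,
\qquad V(T,x) \;=\; F(T,x),
\]
coupled, for each fixed $s$, with the family of Kolmogorov-type equations
\[
(\mathcal{A}^{\hat{u}} f^{s})(t,x) \;=\; 0,
\qquad f^{s}(T,x) \;=\; F(s,x),
\]
where $V(t,x) = f(t,x,t)$, and $(\mathcal{A}^{u} f)(t,x,t)$ means that $\mathcal{A}^{u}$ acts on $(t,x) \mapsto f(t,x,s)$ before setting $s = t$. The two correction terms are what distinguish the system from a standard HJB equation: they vanish exactly when the reward does not depend on the initial point, recovering the time-consistent case.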
Keywords: Time consistency, time inconsistency, time inconsistent control, dynamic programming, stochastic control, Bellman equation, hyperbolic discounting, mean-variance, equilibrium
JEL Classification: C61, C72, C73, G11