On Incomplete Learning and Certainty-Equivalence Control
Operations Research, Vol. 66, No. 4, July-August 2018, pp. 1136-1167
57 Pages. Posted: 12 Jun 2017. Last revised: 1 Oct 2018
Date Written: November 17, 2017
Abstract
We consider a dynamic learning problem in which a decision maker sequentially selects a control and observes a response variable that depends on the chosen control and an unknown sensitivity parameter. After every observation, the decision maker updates her or his estimate of the unknown parameter and uses a certainty-equivalence decision rule to determine subsequent controls based on this estimate. We show that under this certainty-equivalence learning policy the parameter estimates converge with positive probability to an uninformative fixed point that can differ from the true value of the unknown parameter, a phenomenon referred to as incomplete learning. In stark contrast, we show that this certainty-equivalence policy may avoid incomplete learning if the parameter value of interest “drifts away” from the uninformative fixed point at a critical rate. Finally, we prove that one can adaptively limit the learning memory to improve the accuracy of the certainty-equivalence policy in both static (estimation) and slowly varying (tracking) environments, without relying on forced exploration.
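To make the feedback loop in the abstract concrete, the following is a minimal sketch of a certainty-equivalence learning policy under an assumed scalar linear response model with an assumed target level and an ordinary least-squares estimator; these modeling choices are illustrative assumptions, not the paper's exact formulation, and the sketch is not claimed to reproduce the incomplete-learning phenomenon on its own.

import numpy as np

# Illustrative sketch of a certainty-equivalence learning loop (assumptions, not the
# paper's exact setup):
#   assumed response model: y_t = alpha + beta * u_t + eps_t, with (alpha, beta) unknown
#   assumed objective: steer the response y_t toward a target level y_star
rng = np.random.default_rng(1)
alpha_true, beta_true = 0.3, 1.5      # unknown parameters of the response model
y_star = 1.0                          # desired response level
alpha_hat, beta_hat = 0.0, 0.5        # initial parameter estimates
U, Y = [], []                         # history of controls and responses

for t in range(500):
    # Guard against dividing by a near-zero slope estimate (for illustration only).
    b = beta_hat if abs(beta_hat) > 1e-3 else 1e-3
    # Certainty-equivalence rule: treat the current estimates as the truth and
    # solve alpha_hat + beta_hat * u = y_star for the next control u.
    u = (y_star - alpha_hat) / b
    # Observe a noisy response that depends on the chosen control and the true parameters.
    y = alpha_true + beta_true * u + rng.normal(scale=0.1)
    U.append(u)
    Y.append(y)
    # Re-estimate (alpha, beta) by ordinary least squares on the accumulated history.
    X = np.column_stack([np.ones(len(U)), U])
    (alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, np.array(Y), rcond=None)

print(f"estimates: alpha_hat={alpha_hat:.3f}, beta_hat={beta_hat:.3f}")
print(f"truth:     alpha={alpha_true:.3f}, beta={beta_true:.3f}")

Restricting the estimator to a rolling window of recent observations instead of the full history would correspond, loosely, to the memory-limiting idea the abstract mentions for tracking a slowly varying parameter.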
Keywords: Dynamic Control, Sequential Estimation, Certainty Equivalence, Incomplete Learning