Asymptotic Optimality of Semi-Open-Loop Policies in Markov Decision Processes with Large Lead Times
52 Pages · Posted: 20 Oct 2020 · Last revised: 7 May 2023
Date Written: September 2, 2020
Abstract
We consider a generic Markov decision process (MDP) with two controls: one taking effect immediately and the other taking effect only after a positive lead time. As the lead time grows, one would naturally expect the effect of the delayed action to depend only weakly on the current state, so that decoupling the delayed action from the current state could yield good controls. The purpose of this paper is to substantiate this decoupling intuition by establishing the asymptotic optimality of semi-open-loop policies, which specify open-loop controls for the delayed action and closed-loop controls for the immediate action.
For MDPs defined on general spaces with uniformly bounded cost functions and a fast mixing property, we construct a periodic semi-open-loop policy for each lead time value and show that these policies are asymptotically optimal as the lead time goes to infinity. For MDPs defined on Euclidean spaces with linear dynamics and convex structures (convex cost functions and convex constraint sets), we impose another set of conditions under which semi-open-loop policies (actually, constant delayed-control policies) are asymptotically optimal. Moreover, we verify that these conditions hold for a broad class of inventory models, in which there are multiple controls with non-identical lead times.
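To make the decoupling idea concrete, the following is a minimal toy sketch (not the paper's construction) of the kind of inventory system the abstract alludes to: each period an order is placed that arrives only after a fixed lead time, and a constant-delayed-control policy ignores the current inventory state and always orders the same quantity `q`. The function name, cost parameters `h` and `b`, and the policy itself are illustrative assumptions, not taken from the paper.

```python
def simulate_constant_order_policy(q, lead_time, demands, h=1.0, b=2.0):
    """Simulate an inventory system under a constant (open-loop) ordering
    policy: every period an order of size q is placed, and it arrives
    `lead_time` periods later. Costs are h per unit held and b per unit
    backlogged. Returns the average per-period cost.

    This is a hypothetical illustration of a constant delayed-control
    policy, not the construction used in the paper.
    """
    pipeline = [0.0] * lead_time  # orders in transit; pipeline[0] arrives next
    inventory = 0.0
    total_cost = 0.0
    for d in demands:
        if pipeline:
            inventory += pipeline.pop(0)  # receive the order placed lead_time ago
            pipeline.append(q)            # place this period's order
        else:
            inventory += q                # zero lead time: order arrives at once
        inventory -= d
        total_cost += h * max(inventory, 0.0) + b * max(-inventory, 0.0)
    return total_cost / len(demands)


# With deterministic demand of 5 per period, ordering q = 5 (the mean demand)
# keeps the pipeline balanced: after an initial transient of length lead_time,
# the per-period cost stabilizes regardless of how large the lead time is.
avg_cost = simulate_constant_order_policy(q=5, lead_time=3, demands=[5] * 10)
```

The point of the sketch is only that the ordering decision never consults the inventory state, which is exactly what an open-loop control for the delayed action means; the paper's results give conditions under which such state-independent delayed controls lose vanishingly little as the lead time grows.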
Keywords: open-loop policy, asymptotic analysis, Markov decision process, lead time, inventory