Dynamic Programs with Shared Resources and Signals: Dynamic Fluid Policies and Asymptotic Optimality

73 pages. Posted: 17 Dec 2020. Last revised: 29 Jul 2021.


David B. Brown

Duke University - Decision Sciences

Jingwei Zhang

Duke University - Decision Sciences

Date Written: November 10, 2020

Abstract

We consider a sequential decision problem involving shared resources and signals, in which a decision maker repeatedly observes some exogenous information (the signal), modeled as a finite-state Markov process, and then allocates a limited amount of a shared resource across a set of projects. The framework covers a number of applications and generalizes Markovian multi-armed bandit problems by (a) incorporating exogenous information through the signal and (b) allowing for more general resource allocation decisions. Such problems are naturally formulated as stochastic dynamic programs (DPs), but solving the DP is impractical unless the number of projects is small. In this paper, we develop a Lagrangian relaxation and a DP formulation of the corresponding fluid relaxation --- a dynamic fluid relaxation --- that provide upper bounds on the optimal value function as well as a feasible policy. We develop an iterative primal-dual algorithm for solving the dynamic fluid relaxation and analyze the performance of the feasible dynamic fluid policy. Our performance analysis implies, under mild conditions, that the dynamic fluid relaxation bound and feasible policy are asymptotically optimal as the number of projects grows large. Our Lagrangian relaxation uses Lagrange multipliers that depend on the history of past signals in each period: we show that the bounds and analogous policies using restricted forms of Lagrange multipliers (e.g., depending only on the current signal state in each period) in general lead to a performance gap that is linear in the number of projects and thus are not asymptotically optimal in the regime of many projects. We demonstrate the model and results in two applications: (i) a dynamic capital budgeting problem and (ii) a multi-location inventory management problem with limited production capacity and demands that are correlated across locations through a changing market state.
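To illustrate the general idea of a Lagrangian relaxation for a weakly coupled DP, the following is a minimal sketch on a hypothetical toy instance: N identical two-state projects, a finite horizon, and a per-period activation budget. The budget constraint is replaced by period-dependent penalties, which decouples the projects into single-project DPs; the resulting upper bound is then tightened by projected subgradient descent on the multipliers. This toy uses period-dependent multipliers only and has no signal process; it is not the paper's signal-history-dependent construction or its primal-dual algorithm, and all names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Toy weakly coupled DP (illustrative, not the paper's model):
# Each project has state s in {0 (depleted), 1 (ready)}. Activating a ready
# project yields reward r and depletes it; a depleted project recovers with
# probability p each period. A shared budget allows at most B activations per
# period across N projects. The Lagrangian relaxation prices activations with
# per-period multipliers lam[t], decoupling the projects.

def project_dp(lam, r=1.0, p=0.5, T=5):
    """Backward induction for one project under per-period penalties lam."""
    V = np.zeros((T + 1, 2))             # V[t, s]
    act = np.zeros((T, 2), dtype=bool)   # whether activating is optimal
    for t in range(T - 1, -1, -1):
        # depleted: must idle; recovers to ready with probability p
        V[t, 0] = p * V[t + 1, 1] + (1 - p) * V[t + 1, 0]
        idle = V[t + 1, 1]                    # ready + idle: stays ready
        activate = r - lam[t] + V[t + 1, 0]   # ready + activate: reward, deplete
        act[t, 1] = activate > idle
        V[t, 1] = max(idle, activate)
    return V, act

def expected_activations(act, p=0.5):
    """Forward pass: expected activations per period, starting ready."""
    T = act.shape[0]
    dist = np.array([0.0, 1.0])          # (depleted, ready) distribution
    ex = np.zeros(T)
    for t in range(T):
        ex[t] = dist[1] if act[t, 1] else 0.0
        ready_next = dist[0] * p + (0.0 if act[t, 1] else dist[1])
        dist = np.array([1.0 - ready_next, ready_next])
    return ex

def lagrangian_bound(lam, N=10, B=3, T=5):
    """Decoupled upper bound: N independent projects plus the budget credit."""
    V, _ = project_dp(lam, T=T)
    return N * V[0, 1] + B * lam.sum()

def minimize_bound(N=10, B=3, T=5, iters=300, step=0.02):
    """Projected subgradient descent on lam >= 0 to tighten the bound."""
    lam = np.zeros(T)
    best = lagrangian_bound(lam, N, B, T)
    for _ in range(iters):
        _, act = project_dp(lam, T=T)
        grad = B - N * expected_activations(act)   # subgradient of the bound
        lam = np.maximum(0.0, lam - step * grad)
        best = min(best, lagrangian_bound(lam, N, B, T))
    return lam, best
```

The bound is convex and piecewise linear in the multipliers, so the expected activation counts under the decoupled optimal policy supply a subgradient; when expected activations exceed the budget B in some period, the corresponding multiplier is pushed up, tightening the bound. The paper's key departure from this sketch is that its multipliers depend on the realized signal history, which is what enables asymptotic optimality.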

Keywords: Weakly coupled stochastic dynamic programs, Lagrangian relaxations, approximate dynamic programming, capital budgeting, inventory control.

JEL Classification: C61

Suggested Citation

Brown, David B. and Zhang, Jingwei, Dynamic Programs with Shared Resources and Signals: Dynamic Fluid Policies and Asymptotic Optimality (November 10, 2020). Available at SSRN: https://ssrn.com/abstract=3728111 or http://dx.doi.org/10.2139/ssrn.3728111

David B. Brown (Contact Author)

Duke University - Decision Sciences ( email )

Durham, NC 27708-0120
United States

Jingwei Zhang

Duke University - Decision Sciences ( email )

Durham, NC 27708-0120
United States

