Fluid Policies, Reoptimization, and Performance Guarantees in Dynamic Resource Allocation

Operations Research, to appear.

53 Pages. Posted: 17 Nov 2022. Last revised: 23 Oct 2023.

David B. Brown

Duke University - Decision Sciences

Jingwei Zhang

The Chinese University of Hong Kong, Shenzhen

Date Written: November 5, 2022

Abstract

Many sequential decision problems involve deciding how to allocate shared resources across a set of independent systems at each point in time. A classic example is the restless bandit problem, in which a budget constraint limits the selection of arms. Fluid relaxations provide a natural approximation technique for this broad class of problems. A recent stream of research has established strong performance guarantees for feasible policies based on fluid relaxations. In this paper, we generalize and improve these recent performance results. First, we provide easy-to-implement feasible fluid policies that achieve performance within O(\sqrt{N}) of optimal, where N is the number of subproblems. This result holds for a general class of dynamic resource allocation problems with heterogeneous subproblems and multiple shared resource constraints. Second, we show using a novel proof technique that a feasible fluid policy that chooses actions using a reoptimized fluid value function achieves performance within O(\sqrt{N}) of optimal as well. To the best of our knowledge, this is the first such performance guarantee for reoptimization in the general class of dynamic resource allocation problems that we consider. The scaling of the constants with respect to time in these results implies similar results in the infinite horizon setting. Finally, we develop and analyze a class of feasible \emph{fluid-budget balancing} policies that stay ``close'' to actions selected by an optimal fluid policy while simultaneously using as much of the shared resources as possible. We show that these policies achieve performance within O(1) of optimal under particular nondegeneracy assumptions. This result generalizes recent advances for restless bandit problems by considering (a) any finite number of actions for each subproblem and (b) heterogeneous subproblems with a fixed number of types. We demonstrate the use of these techniques on dynamic multi-warehouse inventory problems and find empirically that these fluid-based policies achieve excellent performance, as our theory suggests.
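
For readers unfamiliar with the technique, the following is a minimal sketch of a fluid (linear programming) relaxation in the restless-bandit special case mentioned above; the notation (occupation measures x_{t,n}(s,a), rewards r_n, transition kernels P_n, initial distributions \mu_n, horizon T, and per-period activation budget B) is illustrative and is not taken from the paper:

    \max_{x \ge 0}   \sum_{t=1}^{T} \sum_{n=1}^{N} \sum_{s,a} r_n(s,a) \, x_{t,n}(s,a)
    \text{s.t.}      \sum_{a} x_{1,n}(s,a) = \mu_n(s)                                          \quad \forall n, s
                     \sum_{a} x_{t+1,n}(s',a) = \sum_{s,a} P_n(s' \mid s,a) \, x_{t,n}(s,a)    \quad \forall t < T, n, s'
                     \sum_{n} \sum_{s} x_{t,n}(s,1) \le B                                      \quad \forall t

Here x_{t,n}(s,a) is the probability that subproblem (arm) n is in state s and takes action a at time t, and a = 1 denotes activating the arm. The hard constraint that at most B arms are activated on every sample path is replaced by the expectation constraint in the last line; this decouples the subproblems, makes the relaxation a tractable LP, and (for a maximization problem) yields an upper bound on the optimal value. Fluid policies of the kind studied in the paper are feasible policies built by tracking a solution of such a relaxation while respecting the hard per-period constraints.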

Keywords: weakly coupled stochastic dynamic programs, fluid relaxations, restless bandits, asymptotic optimality

JEL Classification: C61

Suggested Citation

Brown, David B. and Zhang, Jingwei, Fluid Policies, Reoptimization, and Performance Guarantees in Dynamic Resource Allocation (November 5, 2022). Operations Research, to appear. Available at SSRN: https://ssrn.com/abstract=4267615 or http://dx.doi.org/10.2139/ssrn.4267615

David B. Brown (Contact Author)

Duke University - Decision Sciences

100 Fuqua Drive
Durham, NC 27708-0120
United States

Jingwei Zhang

The Chinese University of Hong Kong, Shenzhen
