Online Learning and Pricing for Service Systems with Reusable Resources
70 Pages · Posted: 18 Feb 2021 · Last revised: 8 Nov 2022
Date Written: December 15, 2020
Abstract
We consider a price-based revenue management problem with finite reusable resources over a finite time horizon $T$. Customers arrive according to a price-dependent Poisson process, and each customer requests one unit from a pool of $c$ homogeneous reusable resources. If a unit is available, the customer is served for a price-dependent exponentially distributed service time; otherwise, the customer waits in a queue until a unit becomes available. In this paper, we assume that the firm does not know how the arrival and service rates depend on the posted price, and thus it makes adaptive pricing decisions in each period based only on past observations so as to maximize the cumulative revenue. Given a discrete price set of cardinality $P$, we propose two online learning algorithms, termed Batch Upper Confidence Bound (BUCB) and Batch Thompson Sampling (BTS), and prove that their cumulative regret is upper bounded by $\tilde{O}(\sqrt{PT})$, which matches the regret lower bound. In establishing the regret bound, we control the transient system performance upon price changes via a novel coupling argument, and we generalize the bandit framework to accommodate sub-exponential rewards. We also extend our approach to models with balking and reneging customers, and discuss a continuous price setting. Our numerical experiments demonstrate the efficacy of the proposed BUCB and BTS algorithms.
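To make the batched bandit idea concrete, below is a minimal illustrative sketch of a batch-UCB price-selection loop for this setting. It is not the paper's exact BUCB algorithm: the fixed batch length, the per-batch revenue estimator, the confidence radius, and the `run_batch` callback (a simulator or live-system interface) are all simplifying assumptions introduced here for exposition.

```python
import numpy as np

def batch_ucb_pricing(prices, run_batch, T, batch_len=100):
    """Illustrative batch-UCB pricing sketch (assumptions: fixed batch length,
    simple empirical-mean revenue estimate, standard UCB exploration bonus).

    prices    : list of candidate prices (cardinality P)
    run_batch : callable (price, batch_len) -> total revenue observed while
                posting `price` for `batch_len` time units (hypothetical interface)
    T         : total time horizon
    """
    P = len(prices)
    counts = np.zeros(P)    # number of batches each price has been posted
    rev_sum = np.zeros(P)   # cumulative per-batch revenue for each price
    t = 0
    while t < T:
        if counts.min() == 0:
            # Post each candidate price at least once before comparing indices.
            i = int(np.argmin(counts))
        else:
            mean_rev = rev_sum / counts
            # UCB exploration bonus shrinks as a price accumulates batches.
            radius = np.sqrt(2.0 * np.log(max(t, 2)) / counts)
            i = int(np.argmax(mean_rev + radius))
        revenue = run_batch(prices[i], batch_len)  # revenue accrued in this batch
        counts[i] += 1
        rev_sum[i] += revenue
        t += batch_len
    # Return the price with the best empirical per-batch revenue.
    return prices[int(np.argmax(rev_sum / np.maximum(counts, 1)))]
```

Posting a price for an entire batch, rather than re-optimizing instantaneously, reflects the motivation in the abstract: after a price change the queueing system needs time to settle, and the paper's coupling argument bounds the revenue lost during such transient periods.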
Keywords: learning, pricing, reusable resources, service systems, multi-armed bandit, coupling analysis