Online Network Revenue Management Using Thompson Sampling
Operations Research, Forthcoming
52 Pages. Posted: 3 Apr 2015; last revised: 5 Mar 2018
Date Written: November 7, 2017
We consider a price-based network revenue management problem in which a retailer aims to maximize revenue from multiple products with limited inventory over a finite selling season. As is common in practice, we assume the demand function contains unknown parameters that must be learned from sales data. In the presence of these unknown demand parameters, the retailer faces the tradeoff commonly referred to as the exploration-exploitation tradeoff. Toward the beginning of the selling season, the retailer may offer several different prices to learn demand at each price (the "exploration" objective). Over time, the retailer can use this knowledge to set a price that maximizes revenue over the remainder of the selling season (the "exploitation" objective). We propose a class of dynamic pricing algorithms that builds upon the simple yet powerful machine learning technique known as Thompson sampling to balance the exploration-exploitation tradeoff in the presence of inventory constraints. Our algorithms enjoy both strong theoretical performance guarantees and promising numerical performance when compared to other algorithms developed for similar settings. Moreover, we show how our algorithms can be extended to general multi-armed bandit problems with resource constraints, with applications in other revenue management settings and beyond.
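To make the Thompson sampling idea concrete, the following is a minimal illustrative sketch (not the paper's algorithm) for a single product with a finite set of candidate prices, Bernoulli demand with an unknown purchase probability at each price, Beta priors, and a hard inventory limit. All names and parameter choices here are hypothetical, chosen only to illustrate how posterior sampling drives the exploration-exploitation tradeoff.

```python
import random

def thompson_pricing(prices, true_probs, horizon, inventory, seed=0):
    """Illustrative Thompson sampling for single-product dynamic pricing.

    prices: list of candidate prices
    true_probs: unknown purchase probability at each price (used only to
        simulate sales; the algorithm never reads these directly)
    horizon: number of selling periods
    inventory: units available; selling stops when it runs out
    """
    rng = random.Random(seed)
    K = len(prices)
    # Beta(1, 1) priors on each price's purchase probability
    alpha = [1.0] * K  # 1 + observed sales at price k
    beta = [1.0] * K   # 1 + observed no-sales at price k
    revenue = 0.0
    for _ in range(horizon):
        if inventory <= 0:
            break
        # Exploration: sample a demand estimate from each posterior
        samples = [rng.betavariate(alpha[k], beta[k]) for k in range(K)]
        # Exploitation: offer the price with highest sampled expected revenue
        k = max(range(K), key=lambda i: prices[i] * samples[i])
        sale = rng.random() < true_probs[k]  # simulate one customer
        if sale:
            revenue += prices[k]
            inventory -= 1
            alpha[k] += 1
        else:
            beta[k] += 1
    return revenue, alpha, beta
```

Because each period's price is chosen by maximizing against a random posterior draw rather than the posterior mean, under-explored prices are occasionally offered, and the posteriors concentrate on the revenue-maximizing price over time. The paper's setting is more general (multiple products sharing network inventory), which the full algorithms handle via an inventory-constrained optimization step in place of the simple argmax above.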
Keywords: revenue management, pricing, multi-armed bandit, Thompson sampling