How Big Should Your Data Really Be? Data-Driven Newsvendor: Learning One Sample at a Time
69 Pages · Posted: 8 Jul 2021 · Last revised: 27 Jul 2022
Date Written: March 15, 2021
Abstract
We study the classical newsvendor problem in which the decision-maker must trade off underage and overage costs. In contrast to the typical setting, we assume that the decision-maker does not know the underlying distribution driving uncertainty but has access only to historical data. In turn, the key questions are how to map existing data to a decision and what type of performance to expect as a function of the data size. We analyze the classical setting with access to past samples drawn from the distribution (e.g., past demand), focusing not only on asymptotic performance but also on what we call the transient regime of learning, i.e., performance for arbitrary data sizes. We evaluate the performance of any algorithm through its worst-case relative expected regret, compared to an oracle with knowledge of the distribution. We provide the first exact finite-sample analysis of the classical Sample Average Approximation (SAA) algorithm for this class of problems across all data sizes. This allows us to uncover novel fundamental insights into the value of data: it reveals that tens of samples are sufficient to perform very efficiently, but also that more data can lead to worse out-of-sample performance for SAA. We then focus on the general class of mappings from data to decisions, without any restriction on the set of policies, and derive an optimal algorithm (in the minimax sense) as well as characterize its associated performance. This leads to significant improvements for limited data sizes and allows us to quantify exactly the value of historical information.
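As a rough illustration of the data-to-decision mapping discussed above, the sketch below (not from the paper) computes the standard SAA newsvendor decision: the empirical quantile at the critical fractile b/(b+h), where b is the per-unit underage cost and h the per-unit overage cost. The function name, tie-breaking convention, and example numbers are illustrative assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def saa_newsvendor_order(samples, underage_cost, overage_cost):
    """Sample Average Approximation (SAA) order quantity for the newsvendor.

    SAA orders the empirical quantile of past demand at the critical
    fractile q = b / (b + h), with b the underage and h the overage cost.
    """
    samples = np.sort(np.asarray(samples, dtype=float))
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    n = len(samples)
    # Smallest sample whose empirical CDF value reaches the critical ratio
    # (a common SAA convention; other tie-breaking rules are possible).
    k = int(np.ceil(critical_ratio * n)) - 1
    return samples[max(k, 0)]

# Example: 20 historical demand samples, underage cost 3, overage cost 1,
# so the target fractile is 0.75.
rng = np.random.default_rng(0)
demand_history = rng.exponential(scale=100.0, size=20)
print(saa_newsvendor_order(demand_history, underage_cost=3.0, overage_cost=1.0))
```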
Keywords: Limited data, data-driven decisions, minimax regret, approximation ratio, sample average approximation, empirical optimization, finite samples, robust optimization