Safe Data Collection for Offline and Online Policy Learning

43 Pages · Posted: 10 Nov 2021 · Last revised: 5 Aug 2022

Date Written: November 8, 2021


Motivated by practical needs of experimentation and policy learning in online platforms, we study the problem of \emph{safe data collection}. Specifically, our goal is to develop a \emph{logging policy} that efficiently explores different actions to elicit information while achieving reward competitive with a baseline \emph{production policy}. We first show that the common practice of mixing the production policy with randomized exploration, despite being safe, is sub-optimal in maximizing information gain. We then propose a safe optimal logging policy via a novel water-filling technique for the case when no side information about the actions' expected rewards is available. We improve upon this design by incorporating side information, and further extend our approaches to the linear contextual model to account for a large number of actions.
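To make the safety constraint concrete, below is a minimal illustrative sketch of a water-filling-style allocation, not the paper's actual algorithm: it spreads exploration probability as evenly as possible over actions while keeping the logging policy's expected reward within a (1 - alpha) factor of the baseline action's. The function name, the safety parameter `alpha`, and the assumption that mean rewards are known are all our own for illustration.

```python
import numpy as np

def waterfill_logging(mu, b, alpha):
    """Illustrative sketch (not the paper's algorithm): give every
    non-baseline action an equal exploration probability c, as large as
    the safety constraint  p @ mu >= (1 - alpha) * mu[b]  allows.

    mu    : estimated mean reward of each action (assumed known here)
    b     : index of the baseline/production action
    alpha : allowed relative sacrifice in expected reward
    """
    mu = np.asarray(mu, dtype=float)
    K = len(mu)
    floor = (1.0 - alpha) * mu[b]          # minimum acceptable reward
    others = np.delete(np.arange(K), b)
    S = mu[others].sum()
    # Reward lost per unit of mass moved from the baseline to uniform
    # exploration over the other K-1 actions.
    denom = (K - 1) * mu[b] - S
    if denom <= 0:
        c = 1.0 / K                         # exploring does not hurt: go uniform
    else:
        c = min(1.0 / K, (mu[b] - floor) / denom)
    p = np.full(K, c)
    p[b] = 1.0 - (K - 1) * c               # leftover mass stays on the baseline
    return p
```

For example, with `mu = [1.0, 0.5, 0.2]`, baseline `b = 0`, and `alpha = 0.1`, the sketch puts equal probability on the two sub-optimal actions and the remainder on the baseline, making the safety constraint tight at an expected reward of 0.9.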

Along the way, we analyze how our data logging policies impact errors in offline (off-policy) learning, and we empirically validate the benefit of our design through extensive numerical experiments on synthetic and MNIST datasets. To further demonstrate the generality of our approach, we also consider the safe online learning setting. By adaptively applying our techniques, we develop the Safe Phased-Elimination (\spe) algorithm, which achieves an optimal regret bound with only a logarithmic number of policy updates.

Keywords: optimal design, experimental design, safety, off-policy learning

Suggested Citation

Zhu, Ruihao and Kveton, Branislav, Safe Data Collection for Offline and Online Policy Learning (November 8, 2021). Available at SSRN.

Ruihao Zhu (Contact Author)

Cornell University

Ithaca, NY 14853
United States

Branislav Kveton

Amazon Science

