Calibration of Distributionally Robust Empirical Optimization Models

Forthcoming, Operations Research

51 Pages · Posted: 18 Jun 2020 · Last revised: 23 Jun 2020

Jun-ya Gotoh

Chuo University - Department of Data Science for Business Innovation

Michael Jong Kim

Sauder School of Business, University of British Columbia

Andrew Lim

National University of Singapore (NUS) - Department of Decision Sciences; National University of Singapore (NUS) - Department of Finance; National University of Singapore (NUS) - Institute for Operations Research and Analytics

Date Written: February 14, 2020

Abstract

We study the out-of-sample properties of robust empirical optimization problems with smooth φ-divergence penalties and smooth concave objective functions, and develop a theory for data-driven calibration of the non-negative “robustness parameter” δ that controls the size of the deviations from the nominal model. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a “little bit of robustness” (i.e., δ small and positive) is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that substantial variance (sensitivity) reduction is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods like the bootstrap. Our examples show that robust solutions resulting from “open loop” calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can be very conservative out-of-sample, while those corresponding to the robustness parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap) with no regard for the variance are often insufficiently robust.
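The calibration idea in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a KL (relative-entropy) penalty, for which the robust empirical objective has the well-known closed form −δ log((1/n) Σᵢ exp(−r(x, ξᵢ)/δ)), and a toy newsvendor reward with illustrative parameters `p` and `c`. It traces a bootstrap approximation of the mean-variance trade-off across several values of δ, in the spirit of the robust mean-variance frontier described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy newsvendor reward r(x, d) = p*min(x, d) - c*x (illustrative parameters)
p, c = 2.0, 1.0

def reward(x, d):
    return p * np.minimum(x, d) - c * x

def robust_objective(x, data, delta):
    """KL-penalized robust empirical reward: -delta * log mean exp(-r/delta).
    As delta -> 0 this recovers the plain sample-mean reward; by Jensen's
    inequality it is always <= the empirical mean reward."""
    r = reward(x, data)
    if delta == 0.0:
        return r.mean()
    z = -r / delta
    m = z.max()                       # log-sum-exp shift for numerical stability
    return -delta * (m + np.log(np.mean(np.exp(z - m))))

def solve(data, delta, grid):
    """Maximize the robust empirical objective by grid search over x."""
    vals = [robust_objective(x, data, delta) for x in grid]
    return grid[int(np.argmax(vals))]

# Bootstrap sketch of the out-of-sample mean-variance trade-off in delta:
# resample the data, solve the robust problem on each resample, and record
# the reward of the resulting decision on the original sample.
n, B = 50, 200
data = rng.exponential(scale=1.0, size=n)
grid = np.linspace(0.0, 4.0, 81)

for delta in [0.0, 0.1, 0.5, 1.0]:
    oos = []
    for _ in range(B):
        boot = rng.choice(data, size=n, replace=True)
        x_b = solve(boot, delta, grid)
        oos.append(reward(x_b, data).mean())
    oos = np.asarray(oos)
    print(f"delta={delta:4.1f}  mean reward={oos.mean():.3f}  variance={oos.var():.4f}")
```

Plotting the printed (variance, mean) pairs would trace the approximate frontier; the paper's point is that small positive δ typically cuts the variance sharply at a first-order-smaller cost in the mean.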

Keywords: distributionally robust optimization, data driven optimization, out-of-sample performance, variance reduction, calibration, worst-case sensitivity

JEL Classification: C02, C13, C14, C44, C61

Suggested Citation

Gotoh, Jun-ya and Kim, Michael Jong and Lim, Andrew E. B., Calibration of Distributionally Robust Empirical Optimization Models (February 14, 2020). Forthcoming, Operations Research, Available at SSRN: https://ssrn.com/abstract=3602512

Jun-ya Gotoh

Chuo University - Department of Data Science for Business Innovation

1-13-27 Kasuga
Bunkyo-ku, Tokyo 112-8551
Japan
+81-3-3817-1928 (Phone)

HOME PAGE: http://www.indsys.chuo-u.ac.jp/~jgoto/

Michael Jong Kim

Sauder School of Business, University of British Columbia

Vancouver
Canada

Andrew E. B. Lim (Contact Author)

National University of Singapore (NUS) - Department of Decision Sciences

NUS Business School
Mochtar Riady Building, 15 Kent Ridge
Singapore, 119245
Singapore

National University of Singapore (NUS) - Department of Finance

Mochtar Riady Building
15 Kent Ridge Drive
Singapore, 119245
Singapore

National University of Singapore (NUS) - Institute for Operations Research and Analytics

Singapore
