Tractable Robust Supervised Learning Models

40 Pages · Posted: 14 Dec 2021

Melvyn Sim

National University of Singapore (NUS) - NUS Business School

Long Zhao

NUS Business School - Department of Analytics and Operations

Minglong Zhou

Fudan University - School of Management

Date Written: December 9, 2021

Abstract

At the heart of supervised learning is a minimization problem whose objective evaluates a set of training data through a loss function that penalizes poor fitting and a regularization function that penalizes over-fitting to the training data. More recently, data-driven robust-optimization-based learning models have provided an intuitive robustness perspective on regularization. However, when the loss function is not Lipschitz continuous, solving the robust learning models exactly can be computationally challenging. We focus on providing tractable approximations for robust regression and classification problems with loss functions derived from Lipschitz continuous functions raised to the power of p. We also show that the type-p robust learning models are equivalent to pth-root regularization problems when the underlying support sets are unbounded. Inspired by Long et al. (2021), we also propose tractable type-p robust satisficing learning models that are specified by target loss parameters. We show that the robust satisficing regression and classification models can be tractably solved for a large class of problems, and we establish finite-sample probabilistic guarantees for limiting losses beyond the specified target. Although regularization and robust satisficing can generate the same family of solutions, our empirical studies on popular datasets indicate that relative targets yielding reasonably good out-of-sample performance lie within a narrow range. We also demonstrate in the numerical study that the target-based hyper-parameter is easier to determine via cross-validation and can improve out-of-sample performance compared with standard regularization approaches.
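
To fix ideas, a minimal mathematical sketch of the generic formulations discussed above is given below. The notation (loss \ell, regularizer \Omega, ambiguity radius \epsilon, target \tau, fragility k, discrepancy \Delta) is illustrative only and does not reproduce the paper's exact definitions.

% Illustrative sketch; notation is assumed, not taken from the paper.
% Regularized empirical risk minimization over training samples (x_i, y_i):
\[
\min_{\boldsymbol{w}} \ \frac{1}{N}\sum_{i=1}^{N} \ell(\boldsymbol{w};\boldsymbol{x}_i,y_i) + \lambda\,\Omega(\boldsymbol{w})
\qquad \text{(regularized learning)}
\]
% A data-driven robust counterpart hedges against distributions close to the
% empirical distribution \hat{\mathbb{P}}_N within a prescribed radius \epsilon:
\[
\min_{\boldsymbol{w}} \ \sup_{\mathbb{Q}:\,\Delta(\mathbb{Q},\hat{\mathbb{P}}_N)\le\epsilon} \
\mathbb{E}_{\mathbb{Q}}\!\left[\ell(\boldsymbol{w};\tilde{\boldsymbol{x}},\tilde{y})\right]
\qquad \text{(robust learning)}
\]
% A robust satisficing model instead fixes a target loss \tau and minimizes the
% fragility k that bounds the excess loss by the deviation from \hat{\mathbb{P}}_N,
% in the spirit of Long et al. (2021):
\[
\min_{\boldsymbol{w},\,k\ge 0} \ k
\quad \text{s.t.} \quad
\mathbb{E}_{\mathbb{Q}}\!\left[\ell(\boldsymbol{w};\tilde{\boldsymbol{x}},\tilde{y})\right]
\le \tau + k\,\Delta(\mathbb{Q},\hat{\mathbb{P}}_N) \ \ \forall\,\mathbb{Q}
\qquad \text{(robust satisficing)}
\]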

Keywords: Regression, classification, regularization, robust optimization, robust satisficing

Suggested Citation

Sim, Melvyn and Zhao, Long and Zhou, Minglong, Tractable Robust Supervised Learning Models (December 9, 2021). Available at SSRN: https://ssrn.com/abstract=3981205 or http://dx.doi.org/10.2139/ssrn.3981205

Melvyn Sim

National University of Singapore (NUS) - NUS Business School (email)

1 Business Link
Singapore, 117592
Singapore

Long Zhao

NUS Business School - Department of Analytics and Operations (email)

15 Kent Ridge Dr
Singapore, Singapore 119245
Singapore

Minglong Zhou (Contact Author)

Fudan University - School of Management (email)

No. 670, Guoshun Road
Shanghai, 200433
China

HOME PAGE: https://sites.google.com/view/minglongzhou
