Adaptive Supervised Learning for Volatility Targeting Models
12 Pages · Posted: 20 Sep 2021
Date Written: September 15, 2021
Abstract
In the context of risk-based portfolio construction and proactive risk management, finding robust predictors of future realised volatility is paramount to achieving optimal performance. Volatility has been documented in the economics literature to exhibit pronounced persistence, with clusters of high- or low-volatility regimes, and to mean-revert to a normal level, underpinning the Nobel prize-winning work on Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models. From a Reinforcement Learning (RL) point of view, this process can be interpreted as a model-based RL approach in which the goal of the models is twofold: first, to represent the volatility dynamics and forecast its term structure, and second, to compute a resulting allocation that matches a given target volatility, hence the name "volatility targeting method for risk-based portfolios". However, the resulting volatility model-based RL approaches are hard to tell apart, as each model yields similar performance with no clear dominant one. We therefore present an innovative approach with an additional supervised learning step that predicts the best model(s) based on the historical performance ordering of the RL models. Our contribution shows that adding a supervised learning overlay to decide which model(s) to use improves over a naive benchmark that simply averages all RL models. A salient ingredient in this supervised learning task is to adaptively select features based on their significance, thanks to minimum importance filtering. This work extends our previous work on combining model-free and model-based RL. It mixes different types of learning procedures, namely model-based RL and supervised learning, opening new doors to combining different machine learning approaches.
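To make the two mechanisms named in the abstract more concrete, the following is a minimal sketch, not the paper's implementation: it illustrates (1) volatility targeting, i.e. scaling exposure by the ratio of a target volatility to a forecast volatility, and (2) a supervised overlay that learns which volatility model to trust, with a minimum-importance filter that adaptively drops weak features. All function names, thresholds, and data below are hypothetical placeholders.

```python
# Sketch only: hypothetical data, labels, and thresholds; not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# --- (1) Volatility targeting allocation ---
def target_vol_weight(forecast_vol, target_vol=0.10, max_leverage=2.0):
    """Scale exposure so the ex-ante portfolio vol matches target_vol."""
    return min(target_vol / max(forecast_vol, 1e-8), max_leverage)

# --- (2) Supervised model-selection overlay ---
# Hypothetical dataset: each row = features describing the current regime,
# label = index of the volatility model that performed best next period.
n_samples, n_features, n_models = 500, 12, 4
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_models, size=n_samples)  # placeholder labels

# First pass: fit, then drop features below a minimum-importance threshold.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
min_importance = 0.05                      # hypothetical filtering threshold
keep = clf.feature_importances_ >= min_importance
if not keep.any():                         # fallback if the filter is too strict
    keep = np.ones(n_features, dtype=bool)

# Second pass: refit the selector on the retained features only.
clf_filtered = RandomForestClassifier(n_estimators=200, random_state=0)
clf_filtered.fit(X[:, keep], y)

# At decision time: predict which model to trust, then size the position
# with that model's volatility forecast.
x_today = rng.normal(size=(1, n_features))
best_model = int(clf_filtered.predict(x_today[:, keep])[0])
model_forecasts = [0.12, 0.18, 0.09, 0.15]  # placeholder vol forecasts
weight = target_vol_weight(model_forecasts[best_model])
print(f"selected model {best_model}, allocation weight {weight:.2f}")
```

In practice the overlay could also output a probability over models and blend their forecasts accordingly; the hard selection above is kept only to keep the sketch short.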