Adaptive Learning for Financial Markets Mixing Model-Based and Model-Free RL for Volatility Targeting

Forthcoming in AAMAS ALA 2021 workshop, Machine Learning Group, LAMSADE, Dauphine University

MILES Working paper

10 Pages; Posted: 30 Apr 2021; Last revised: 14 Jun 2021

Eric Benhamou

Université Paris Dauphine; EB AI Advisory; AI For Alpha

David Saltiel

Université Paris Dauphine; A.I. Square Connect; AI For Alpha

Serge Tabachnik

Lombard Odier Investment Managers

Sui Kai Wong

Lombard Odier Investment Management

François Chareyron

Lombard Odier Investment Managers

Date Written: April 19, 2021

Abstract

Model-Free Reinforcement Learning has achieved meaningful results in stable environments but, to this day, it remains problematic in regime-changing environments such as financial markets. In contrast, model-based RL is able to capture some fundamental and dynamical concepts of the environment but suffers from cognitive bias. In this work, we propose to combine the best of the two techniques by selecting among various model-based approaches using Model-Free Deep Reinforcement Learning. Beyond past performance and volatility, we include additional contextual information such as macro and risk-appetite signals to account for implicit regime changes. We also adapt traditional RL methods to real-life situations by considering only past data in the training sets; hence we cannot use future information in our training data, as K-fold cross-validation would imply. Building on traditional statistical methods, we use the classical "walk-forward analysis", defined by successive training and testing over expanding periods, to assess the robustness of the resulting agent.
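The walk-forward analysis mentioned above can be sketched as an expanding-window split: each fold trains on all data up to a cutoff and tests only on the block that follows, so no future information leaks into training (unlike K-fold cross-validation). The function name and parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def walk_forward_splits(n_samples, n_folds, min_train):
    """Expanding-window walk-forward splits: fold k trains on samples
    [0, train_end) and tests on the next block, so training data never
    contains observations from the test period."""
    test_size = (n_samples - min_train) // n_folds
    for k in range(n_folds):
        train_end = min_train + k * test_size
        test_end = min(train_end + test_size, n_samples)
        yield np.arange(0, train_end), np.arange(train_end, test_end)

# usage: 1000 daily observations, 4 folds, at least 400 training points
for train_idx, test_idx in walk_forward_splits(1000, 4, 400):
    pass  # fit the agent on train_idx, evaluate on test_idx
```

Each successive fold reuses the previous test block as additional training data, mirroring how a live strategy is periodically re-trained on all history available at that date.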

Finally, we present the concept of statistical significance of differences, based on a two-tailed T-test, to highlight the ways in which our models differ from more traditional ones. Our experimental results show that our approach outperforms traditional financial baseline portfolio models such as the Markowitz model on almost all evaluation metrics commonly used in financial mathematics, namely net performance, Sharpe and Sortino ratios, maximum drawdown, and maximum drawdown over volatility.

Keywords: deep reinforcement learning, volatility targeting, model-based RL, model-free RL

JEL Classification: G1, G11, D5

Suggested Citation

Benhamou, Eric and Saltiel, David and Tabachnik, Serge and Wong, Sui Kai and Chareyron, François, Adaptive Learning for Financial Markets Mixing Model-Based and Model-Free RL for Volatility Targeting (April 19, 2021). Forthcoming in AAMAS ALA 2021 workshop, Machine Learning Group, LAMSADE, Dauphine University, MILES Working paper, Available at SSRN: https://ssrn.com/abstract=3830012 or http://dx.doi.org/10.2139/ssrn.3830012

Eric Benhamou (Contact Author)

Université Paris Dauphine ( email )

Place du Maréchal de Tassigny
Paris, Cedex 16 75775
France

EB AI Advisory ( email )

35 Boulevard d'Inkermann
Neuilly sur Seine, 92200
France

AI For Alpha ( email )

35 boulevard d'Inkermann
Neuilly sur Seine, 92200
France

David Saltiel

Université Paris Dauphine ( email )

Place du Maréchal de Tassigny
Paris, Cedex 16 75775
France

A.I. Square Connect ( email )

35 Boulevard d'Inkermann
Neuilly sur Seine, 92200
France

AI For Alpha ( email )

35 boulevard d'Inkermann
Neuilly sur Seine, 92200
France

Serge Tabachnik

Lombard Odier Investment Managers ( email )

6, avenue des Morgines
Petit-Lancy, 1213
Switzerland

Sui Kai Wong

Lombard Odier Investment Management

3901, Two Exchange Square
8 Connaught Place
Hong Kong
Hong Kong

François Chareyron

Lombard Odier Investment Managers ( email )

Avenue des Morgines 6
Geneva, 1213
Switzerland
