Robust Risk-Aware Reinforcement Learning
13 Pages. Posted: 27 Aug 2021. Last revised: 15 Dec 2021.
Date Written: December 14, 2021
We present a reinforcement learning (RL) approach for robust optimisation of risk-aware performance criteria. To allow agents to express a wide variety of risk-reward profiles, we assess the value of a policy using rank-dependent expected utility (RDEU). RDEU allows the agent to seek gains while simultaneously protecting itself against downside events. To robustify optimal policies against model uncertainty, we assess a policy not by the distribution it induces, but by the worst-case distribution lying within a Wasserstein ball around it. Our problem formulation may thus be viewed as an actor choosing a policy (the outer problem) and an adversary acting to worsen the performance of that policy (the inner problem). We develop explicit policy gradient formulae for both the inner and outer problems, and demonstrate their efficacy on three prototypical financial problems: robust portfolio allocation, optimising a benchmark, and statistical arbitrage.
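To make the RDEU criterion concrete, the following is a minimal sketch (not the authors' implementation) of an empirical RDEU estimator: sorted outcomes are weighted by increments of a distortion function applied to the empirical survival probabilities. The utility `u`, distortion `g`, and the simulated return distribution are illustrative assumptions.

```python
import numpy as np

def rdeu(samples, u, g):
    """Empirical rank-dependent expected utility.

    samples : array of outcomes X_1, ..., X_n
    u       : increasing utility function
    g       : distortion function on [0, 1] with g(0) = 0, g(1) = 1
    """
    x = np.sort(np.asarray(samples, dtype=float))  # x_(1) <= ... <= x_(n)
    n = len(x)
    tail = np.arange(n, 0, -1) / n       # survival probs (n-i+1)/n, i = 1..n
    w = g(tail) - g(tail - 1.0 / n)      # distorted probability weights
    return float(np.sum(u(x) * w))

# Illustrative use: a convex distortion g(p) = p**2 overweights the worst
# outcomes, so RDEU of these simulated returns falls below their plain mean.
rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.2, size=10_000)
value = rdeu(returns, u=lambda v: v, g=lambda p: p**2)
```

With the identity utility and identity distortion, the weights reduce to 1/n and the estimator recovers the sample mean; bending `g` away from the identity is what lets the agent trade off gain-seeking against downside protection.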
Keywords: Robust Optimisation, Reinforcement Learning, Risk Measures, Wasserstein Distance, Statistical Arbitrage, Portfolio Optimisation
JEL Classification: C61, G11, C63, C15, C44