Reinforcement Learning for Continuous-Time Optimal Execution: Actor-Critic Algorithm and Error Analysis

50 Pages. Posted: 10 Mar 2023. Last revised: 8 Feb 2024.

Boyu Wang

The Chinese University of Hong Kong (CUHK) - Department of Systems Engineering & Engineering Management

Xuefeng Gao

The Chinese University of Hong Kong (CUHK) - Department of Systems Engineering & Engineering Management

Lingfei Li

The Chinese University of Hong Kong

Date Written: March 6, 2023

Abstract

We propose an actor-critic reinforcement learning (RL) algorithm for the optimal execution problem. We consider the celebrated Almgren-Chriss model in continuous time and formulate a relaxed stochastic control problem for execution under an entropy-regularized mean-quadratic variation objective. We obtain in closed form the optimal value function and the optimal feedback policy, which is Gaussian. We then utilize these analytical results to parametrize our value function and control policy for RL. While standard actor-critic RL algorithms alternate between a policy evaluation update and a policy gradient update, we introduce a recalibration step in addition to these two updates, which turns out to be critical for convergence. We develop a finite-time error analysis of our algorithm and show that it converges linearly under suitable conditions on the learning rates. We test our algorithm in three types of market simulators built on the Almgren-Chriss model, historical order-flow data, and a stochastic model of limit order books. Empirical results demonstrate the advantages of our algorithm over the classical control method and a deep-learning-based RL algorithm.
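The three-step structure described in the abstract (policy evaluation, policy gradient, recalibration) can be sketched in a toy form as follows. This is a minimal illustrative sketch, not the paper's algorithm: the simulator, the quadratic critic ansatz, the Gaussian feedback policy v ~ N(theta·q, sigma²), and all coefficients and learning rates are assumptions chosen for a simple linear-quadratic liquidation setting in the spirit of Almgren-Chriss.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative Almgren-Chriss-style simulator (all parameters assumed) ---
T, dt = 1.0, 0.01            # trading horizon and time step
N = int(T / dt)
eta, lam = 0.1, 1.0          # temporary-impact and risk-aversion coefficients (assumed)
gamma_ent = 0.1              # entropy-regularization temperature (assumed)

def run_episode(theta, sigma_pi):
    """Simulate one liquidation episode under the Gaussian feedback policy
    v ~ N(theta * q, sigma_pi^2), where q is the remaining inventory."""
    q, traj = 1.0, []
    for _ in range(N):
        v = theta * q + sigma_pi * rng.standard_normal()  # trading (selling) rate
        # running cost: temporary impact plus inventory risk, in the spirit of
        # a mean-quadratic-variation objective
        cost = (eta * v ** 2 + lam * q ** 2) * dt
        traj.append((q, v, cost))
        q -= v * dt
    return traj

# Critic: quadratic value ansatz V(q) = phi[0] * q^2 + phi[1], mirroring the
# closed-form structure of the LQ problem; actor: mean coefficient theta.
phi = np.zeros(2)
theta, sigma_pi = 1.0, 0.5
alpha_c, alpha_a, rho = 0.05, 0.01, 0.1   # learning rates (assumed)

for _ in range(50):
    traj = run_episode(theta, sigma_pi)
    # 1) policy evaluation: semi-gradient TD(0) on the quadratic critic
    for k in range(len(traj) - 1):
        q, v, cost = traj[k]
        q_next = traj[k + 1][0]
        td = cost + (phi[0] * q_next ** 2 + phi[1]) - (phi[0] * q ** 2 + phi[1])
        phi += alpha_c * td * np.array([q ** 2, 1.0])
    # 2) policy gradient: score-function descent on the cost, using a
    #    one-step cost-plus-critic estimate as the signal
    for q, v, cost in traj:
        score = (v - theta * q) / sigma_pi ** 2 * q    # d/dtheta log N(v; theta*q, sigma^2)
        q_val = cost + phi[0] * (q - v * dt) ** 2 + phi[1]
        theta -= alpha_a * score * q_val
    # 3) recalibration: blend the actor toward the greedy coefficient implied
    #    by the critic (v = V'(q) / (2*eta) under this quadratic ansatz) and
    #    reset the exploration variance from the entropy temperature,
    #    mimicking the closed-form Gaussian policy's structure
    theta = (1 - rho) * theta + rho * (phi[0] / eta)
    theta = float(np.clip(theta, 0.0, 5.0))   # keep the sketch numerically stable
    sigma_pi = float(np.sqrt(gamma_ent / (2 * eta)))
```

The recalibration step here is only a stand-in for the paper's: it forces the actor and the exploration variance back into agreement with the functional form of the closed-form optimal Gaussian policy, which is the role the abstract attributes to that extra update.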

Keywords: reinforcement learning, optimal execution, stochastic control, actor-critic method, finite-time error analysis, convergence analysis

JEL Classification: C45, C61, G19

Suggested Citation

Wang, Boyu and Gao, Xuefeng and Li, Lingfei, Reinforcement Learning for Continuous-Time Optimal Execution: Actor-Critic Algorithm and Error Analysis (March 6, 2023). Available at SSRN: https://ssrn.com/abstract=4378950 or http://dx.doi.org/10.2139/ssrn.4378950

Lingfei Li (Contact Author)

The Chinese University of Hong Kong
Shatin, New Territories
Hong Kong

Paper statistics

Downloads: 383 · Abstract Views: 1,806 · Rank: 143,657