FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance

9 Pages. Posted: 8 Nov 2021


Xiao-Yang Liu

Columbia University - Fu Foundation School of Engineering and Applied Science

Hongyang Yang

Columbia University - Department of Statistics

Jiechao Gao

University of Virginia - Department of Computer Science

Christina Wang

New York University (NYU)

Date Written: November 4, 2021

Abstract

Deep reinforcement learning (DRL) has been envisioned to have a competitive edge in quantitative finance. However, quantitative traders face a steep development curve to obtain an agent that automatically takes winning positions in the market, i.e., decides where to trade, at what price, and in what quantity, due to error-prone programming and arduous debugging. In this paper, we present FinRL, the first open-source framework that provides a full pipeline to help quantitative traders overcome this steep learning curve. FinRL features simplicity, applicability, and extensibility under the key principles of full-stack framework, customization, reproducibility, and hands-on tutoring.

Embodied as a three-layer architecture with modular structures, FinRL implements fine-tuned state-of-the-art DRL algorithms and common reward functions, while alleviating debugging workloads. Thus, we help users pipeline strategy design at a high turnover rate. At multiple levels of time granularity, FinRL simulates various markets as training environments using historical data and live trading APIs. Being highly extensible, FinRL reserves a set of user-import interfaces and incorporates trading constraints such as market friction, market liquidity, and the investor's risk aversion. Moreover, serving as practitioners' stepping stones, typical trading tasks are provided as step-by-step tutorials, e.g., stock trading, portfolio allocation, and cryptocurrency trading.
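To make the environment layer concrete, the sketch below shows a minimal Gym-style single-stock trading environment with a proportional transaction cost as the market-friction constraint. The class name, state layout, and parameter names are assumptions chosen for this illustration, not FinRL's actual API; FinRL's own environments are richer (technical indicators, multi-asset action spaces, liquidity and turbulence handling).

```python
import numpy as np

class SingleStockEnv:
    """Minimal Gym-style trading environment for one stock (illustrative sketch,
    not FinRL's actual API). The state is [cash, shares, price]; an action in
    [-1, 1] scales a maximum trade size, and a proportional transaction cost
    models market friction."""

    def __init__(self, prices, initial_cash=10_000.0, max_trade=10, cost_pct=0.001):
        self.prices = np.asarray(prices, dtype=float)
        self.initial_cash = float(initial_cash)
        self.max_trade = max_trade
        self.cost_pct = cost_pct
        self.reset()

    def reset(self):
        self.t = 0
        self.cash = self.initial_cash
        self.shares = 0
        return self._obs()

    def _obs(self):
        return np.array([self.cash, self.shares, self.prices[self.t]])

    def portfolio_value(self):
        return self.cash + self.shares * self.prices[self.t]

    def step(self, action):
        price = self.prices[self.t]
        value_before = self.portfolio_value()

        # Map the continuous action to an integer share quantity.
        qty = int(np.clip(action, -1.0, 1.0) * self.max_trade)
        qty = max(qty, -self.shares)              # no short-selling in this sketch
        if qty > 0:                               # cannot spend more than cash
            affordable = int(self.cash // (price * (1 + self.cost_pct)))
            qty = min(qty, affordable)

        fee = abs(qty) * price * self.cost_pct    # market friction (transaction cost)
        self.cash -= qty * price + fee
        self.shares += qty

        # Advance time; reward is the change in total portfolio value.
        self.t += 1
        done = self.t == len(self.prices) - 1
        reward = self.portfolio_value() - value_before
        return self._obs(), reward, done, {}
```

A DRL agent would then interact with `step` in the usual observe-act-reward loop, with the reward (change in portfolio value) serving as the common reward function the abstract mentions.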

Keywords: Deep reinforcement learning, automated trading, quantitative finance, Markov Decision Process, portfolio allocation.

Suggested Citation

Liu, Xiao-Yang and Yang, Hongyang and Gao, Jiechao and Wang, Christina, FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance (November 4, 2021). Available at SSRN: https://ssrn.com/abstract=3955949 or http://dx.doi.org/10.2139/ssrn.3955949

Xiao-Yang Liu

Columbia University - Fu Foundation School of Engineering and Applied Science ( email )

New York, NY
United States

Hongyang Yang (Contact Author)

Columbia University - Department of Statistics ( email )

Mail Code 4403
New York, NY 10027
United States

Jiechao Gao

University of Virginia - Department of Computer Science ( email )

151 Engineer's Way
P.O. Box 400740
Charlottesville, VA 22904-4740
United States

Christina Wang

New York University (NYU)

Bobst Library, E-resource Acquisitions
20 Cooper Square 3rd Floor
New York, NY 10003-711
United States


Paper statistics

Downloads: 1,078
Abstract Views: 2,412
Downloads Rank: 31,230