Risk-Averse Reinforcement Learning for Algorithmic Trading

8 Pages Posted: 2 Dec 2013 Last revised: 25 Feb 2014

Yun Shen

Technische Universität Berlin (TU Berlin)

Ruihong Huang

Humboldt University of Berlin

Chang Yan

Humboldt University of Berlin

Klaus Obermayer

Technische Universität Berlin (TU Berlin)

Date Written: November 24, 2013

Abstract

We propose a general framework of risk-averse reinforcement learning for algorithmic trading. We test our approach in an experiment based on 1.5 years of millisecond time-scale limit order book data from NASDAQ, a period that includes the 2010 Flash Crash. The results show that our algorithm outperforms risk-neutral reinforcement learning by 1) keeping trading costs substantially lower at the moment the flash crash occurred, and 2) significantly reducing risk over the whole test period.
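The abstract does not reproduce the algorithm itself. As a generic illustration of how risk aversion can be built into temporal-difference learning, the sketch below applies an asymmetric (concave-type) utility transform to the TD error before the Q-update, so that negative surprises are weighted more heavily than positive ones. The specific `utility` shape, the parameter `beta`, and the tabular setting are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def utility(td_error, beta=0.5):
    # Asymmetric transform (assumed form): losses (negative TD errors)
    # are amplified by (1 + beta), gains damped by (1 - beta).
    return np.where(td_error >= 0.0,
                    (1.0 - beta) * td_error,
                    (1.0 + beta) * td_error)

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, beta=0.5):
    # Standard Q-learning TD error ...
    td = r + gamma * Q[s_next].max() - Q[s, a]
    # ... but the update is driven by the utility of the TD error,
    # which biases the learned values toward risk-averse behavior.
    Q[s, a] += alpha * utility(td, beta)
    return Q
```

With `beta = 0.5`, a reward of +1 moves the value estimate by `alpha * 0.5`, while a reward of -1 moves it by `alpha * 1.5`, so the agent learns to avoid actions with occasional large losses even when their mean payoff is comparable. Setting `beta = 0` recovers risk-neutral Q-learning.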

Keywords: High-Frequency Trading, Limit Order Book, Optimal Execution, Machine Learning

JEL Classification: G17

Suggested Citation

Shen, Yun and Huang, Ruihong and Yan, Chang and Obermayer, Klaus, Risk-Averse Reinforcement Learning for Algorithmic Trading (November 24, 2013). Available at SSRN: https://ssrn.com/abstract=2361899 or http://dx.doi.org/10.2139/ssrn.2361899

Yun Shen (Contact Author)

Technische Universität Berlin (TU Berlin)

Marchstr. 23
10587 Berlin
Germany

Ruihong Huang

Humboldt University of Berlin

Unter den Linden 6
10099 Berlin
Germany

Chang Yan

Humboldt University of Berlin

Unter den Linden 6
10099 Berlin
Germany

Klaus Obermayer

Technische Universität Berlin (TU Berlin)

Straße des 17. Juni 135
10623 Berlin
Germany
