High-Frequency Trading Meets Reinforcement Learning: Exploiting the Iterative Nature of Trading Algorithms

28 Pages Posted: 16 Apr 2015 Last revised: 9 Jul 2015


Joaquin Fernandez-Tapia

Tradelab; Laboratoire de Probabilites et Modeles Aleatoires

Date Written: July 9, 2015

Abstract

We propose an optimization framework for market-making in a limit-order book, based on the theory of stochastic approximation. We consider a discrete-time variant of the Avellaneda-Stoikov model, similar to its development in an article by Laruelle, Lehalle and Pagès in the context of optimal liquidation tactics. The idea is to take advantage of the iterative nature of the process of updating bid and ask quotes in order to make the algorithm optimize its strategy on a trial-and-error basis (i.e. on-line learning). An advantage of this approach is that the algorithm explores the system at run-time, so explicit specifications of the price dynamics are not necessary, as they are in the stochastic-control approach. As will be discussed, the rationale behind our method extends to a wider class of algorithmic-trading tactical problems beyond market-making.
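
The on-line learning idea can be illustrated with a minimal Robbins-Monro sketch. This is not the paper's model: the fill probability `exp(-k * delta)`, the single-sided quote, and all parameter values below are illustrative assumptions. A quote placed at offset `delta` from the mid-price is filled with probability `exp(-k * delta)` and earns `delta` when filled, so the expected gain `f(delta) = delta * exp(-k * delta)` is maximized at `delta = 1/k`. The algorithm never evaluates `f` explicitly; it climbs its gradient from observed fills alone, which is the trial-and-error mechanism described above.

```python
import math
import random

def stochastic_quote_update(k=2.0, n_steps=50_000, delta0=0.1, seed=0):
    """Robbins-Monro update of a single quote offset delta (distance to mid).

    Illustrative model (an assumption, not the paper's): a fill at offset
    delta occurs with probability exp(-k * delta) and earns delta, so the
    expected gain f(delta) = delta * exp(-k * delta) peaks at delta = 1/k.
    H_n = (1 - k * delta) * fill_n satisfies E[H_n] = f'(delta), i.e. it is
    an unbiased gradient estimate built only from the observed fill.
    """
    rng = random.Random(seed)
    delta = delta0
    for n in range(1, n_steps + 1):
        # Observe whether this step's quote was filled (Bernoulli trial).
        fill = 1.0 if rng.random() < math.exp(-k * delta) else 0.0
        gamma = 1.0 / n                       # decreasing Robbins-Monro step
        delta += gamma * (1.0 - k * delta) * fill
        delta = max(delta, 0.0)               # keep the offset non-negative
    return delta
```

With `k = 2.0` the scheme drifts toward the optimal offset `1/k = 0.5`, using only fill/no-fill feedback and no model of the price dynamics, which mirrors the run-time exploration discussed in the abstract.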

Keywords: High-frequency trading, algorithmic trading, market-making, on-line learning, stochastic optimization

Suggested Citation

Fernandez-Tapia, Joaquin, High-Frequency Trading Meets Reinforcement Learning: Exploiting the Iterative Nature of Trading Algorithms (July 9, 2015). Available at SSRN: https://ssrn.com/abstract=2594477 or http://dx.doi.org/10.2139/ssrn.2594477

Joaquin Fernandez-Tapia (Contact Author)

Laboratoire de Probabilites et Modeles Aleatoires ( email )

175 rue du Chevaleret
Paris, 75013
France

Tradelab ( email )

73 Rue d'Anjou
Paris, 75008
France


Paper statistics

Downloads
1,824
Abstract Views
6,050
Rank
18,762
PlumX Metrics