Adaptive Robust Control in Continuous-Time
28 Pages · Posted: 5 May 2020 · Last revised: 7 May 2020
Date Written: April 9, 2020
We propose a continuous-time version of the adaptive robust methodology introduced in Bielecki et al. (2019). An agent solves a stochastic control problem in which the underlying uncertainty follows a jump-diffusion process whose drift parameters are unknown to the agent. The agent considers a set of alternative measures to make the control problem robust to model misspecification, and employs a continuous-time estimator to learn the values of the unknown parameters, making the control problem adaptive to the arrival of new information. We use measurable selection theorems to prove the dynamic programming principle for the adaptive robust problem and show that the value function of the agent is characterised by a non-linear partial differential equation. As an example, we derive in closed form the optimal adaptive robust strategy for an agent who acquires a large number of shares in an order-driven market, and we illustrate the financial performance of the execution strategy.
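The core mechanism described above — estimate the unknown drift online, maintain a confidence-based uncertainty set that shrinks as observations accumulate, and control against the worst-case parameter in that set — can be sketched in a simple discretised simulation. This is an illustrative toy, not the paper's closed-form strategy: the function name, the Gaussian confidence radius, and the "worst-case drift for a buyer" rule are all assumptions made for the sketch, and the jump component is omitted.

```python
import math
import random

def simulate_adaptive_robust(theta_true=0.05, sigma=0.2, T=1.0, n=1000,
                             z=1.96, seed=42):
    """Toy discretisation of the adaptive robust idea (hypothetical helper).

    Observes increments of dX_t = theta dt + sigma dW_t, re-estimates the
    unknown drift theta after each observation, and forms a shrinking
    worst-case drift from a confidence-style uncertainty set.
    """
    rng = random.Random(seed)
    dt = T / n
    x = 0.0
    radii = []
    theta_hat = 0.0
    for k in range(1, n + 1):
        # Observe one increment of the (jump-free) diffusion.
        x += theta_true * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t = k * dt
        theta_hat = x / t                  # MLE of the drift given data up to t
        radius = z * sigma / math.sqrt(t)  # uncertainty radius, shrinks like 1/sqrt(t)
        worst_case = theta_hat + radius    # adverse (high) drift for a buyer
        radii.append(radius)
    return theta_hat, radii

theta_hat, radii = simulate_adaptive_robust()
```

The adaptive ingredient is the running estimator `theta_hat`; the robust ingredient is the worst-case drift taken over the uncertainty set of radius `radius`. As data arrive the set tightens, so the robust control gradually concentrates around the learned parameter value.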
Keywords: adaptive robust control, model uncertainty, stochastic control, time-consistency, dynamic programming, optimal acquisition, online learning, algorithmic trading