Causal Reinforcement Learning: An Instrumental Variable Approach

96 pages. Posted: 25 Feb 2021


Jin Li

Faculty of Business and Economics, The University of Hong Kong

Ye Luo

Faculty of Business and Economics, The University of Hong Kong

Xiaowei Zhang (Contact Author)

Faculty of Business and Economics, The University of Hong Kong

Date Written: February 25, 2021

Abstract

In the standard data-analysis framework, data are first collected (once and for all) and then analyzed, and the data-generating process is typically assumed to be exogenous. This approach is natural when the data analyst has no impact on how the data are generated. Advances in digital technology, however, have made it possible for firms to learn from data and make decisions at the same time. Because these decisions generate new data, the data analyst, whether a business manager or an algorithm, also becomes the data generator. In this article, we formulate the problem as a Markov decision process (MDP) and show that this interaction generates a new type of bias, which we call reinforcement bias, that exacerbates the endogeneity problem of static data analysis. When the data are independent and identically distributed, we embed the instrumental variable (IV) approach in the stochastic gradient descent (SGD) algorithm to correct for the bias. For general MDP problems, we propose a class of IV-based reinforcement learning (RL) algorithms to correct for the bias. We establish asymptotic properties of these algorithms by casting them as two-timescale stochastic approximation (SA). Our formulation requires an unbounded state space and, more importantly, Markovian noise. Standard techniques from the RL and SA literatures, which rely on boundedness of the state space and a martingale-difference noise structure, therefore do not apply. We develop new techniques to establish finite-time risk bounds, finite-time bounds for trajectory stability, and the asymptotic distribution of a class of IV-RL algorithms.
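The i.i.d. case described in the abstract can be illustrated with a minimal simulation. The sketch below is not the paper's algorithm; it is a standard IV moment condition, E[z(y - x'θ)] = 0, plugged into an SGD update. All variable names, the data-generating constants, and the step-size schedule are illustrative assumptions. When the regressor x is correlated with the noise through a confounder u, the ordinary SGD update (weighted by x) converges to a biased limit, while the IV-weighted update (weighted by the instrument z) recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0

# Illustrative endogenous linear model: x is correlated with the noise
# eps through the confounder u; the instrument z drives x but is
# independent of eps.
n = 200_000
z = rng.normal(size=n)               # instrument
u = rng.normal(size=n)               # unobserved confounder
x = z + u                            # endogenous regressor
eps = u + 0.1 * rng.normal(size=n)   # noise correlated with x via u
y = theta_true * x + eps

theta_ols, theta_iv = 0.0, 0.0
for t in range(n):
    lr = 1.0 / (t + 100)             # diminishing step size
    # Ordinary SGD on the least-squares loss: gradient weighted by x.
    theta_ols += lr * x[t] * (y[t] - theta_ols * x[t])
    # IV-weighted SGD: the residual is weighted by the instrument z,
    # targeting the moment condition E[z (y - x * theta)] = 0.
    theta_iv += lr * z[t] * (y[t] - theta_iv * x[t])

# Here Cov(x, eps) = 1 and Var(x) = 2, so the ordinary SGD iterate
# drifts toward theta_true + 0.5, while the IV iterate stays near
# theta_true.
print(f"ordinary SGD: {theta_ols:.3f}, IV-SGD: {theta_iv:.3f}")
```

The general MDP case in the paper replaces this single-timescale recursion with two-timescale stochastic approximation, which this toy example does not attempt to reproduce.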

Keywords: Endogeneity, Markov Decision Process, Instrumental Variable, Reinforcement Bias, Reinforcement Learning, Q-Learning, Stochastic Approximation

Suggested Citation

Li, Jin and Luo, Ye and Zhang, Xiaowei, Causal Reinforcement Learning: An Instrumental Variable Approach (February 25, 2021). Available at SSRN: https://ssrn.com/abstract=3792824 or http://dx.doi.org/10.2139/ssrn.3792824


