Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy
Risk Analysis, 37, 1532-1549.
34 Pages. Posted: 26 May 2016. Last revised: 27 Sep 2017.
Date Written: April 22, 2016
Investors in the global financial market must analyze securities across many countries, so the optimal global investment decision requires processing a huge amount of data for a high-dimensional portfolio. This paper investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios, when implemented with traditional sample estimates, converge to the worst-performing portfolio as the portfolio size grows. The crux of the problem is the estimation error that accumulates across the large number of stocks. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio is sparse: it selects stocks through a data-driven procedure and hence offers a stable mean-variance portfolio in practice. As the number of observations grows, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even when the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach.
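To make the idea of a constrained ℓ1 minimization concrete, the following is a minimal sketch, not the paper's exact LPO procedure: it seeks the smallest-ℓ1-norm weight vector whose implied optimality residual is small, i.e. min ||w||_1 subject to ||Σ̂w − μ̂||_∞ ≤ δ, recast as a linear program via the split w = u − v with u, v ≥ 0. The function name `lpo_weights`, the tolerance `delta`, and the simulated data are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lpo_weights(Sigma, mu, delta):
    """Sketch of a constrained l1 minimization for portfolio weights:
    min ||w||_1  s.t.  ||Sigma w - mu||_inf <= delta,
    solved as an LP with w = u - v, u >= 0, v >= 0."""
    p = len(mu)
    c = np.ones(2 * p)                  # objective: sum(u) + sum(v) = ||w||_1
    A = np.hstack([Sigma, -Sigma])      # Sigma w expressed in (u, v)
    A_ub = np.vstack([A, -A])           # two-sided infinity-norm constraint
    b_ub = np.concatenate([mu + delta, delta - mu])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# Toy example with simulated daily returns (hypothetical data).
rng = np.random.default_rng(0)
R = rng.normal(0.001, 0.02, size=(250, 10))   # 250 days, 10 stocks
Sigma_hat = np.cov(R, rowvar=False)
mu_hat = R.mean(axis=0)
w = lpo_weights(Sigma_hat, mu_hat, delta=1e-4)
print(np.round(w, 3))
```

Because the ℓ1 objective drives many weights toward zero, tightening or loosening `delta` trades off sparsity against how closely the weights track the (estimated) unconstrained optimum; this is only a toy low-dimensional illustration of the mechanism, whereas the paper's theory concerns the regime where the number of stocks outgrows the number of observations.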
Keywords: High-Dimensional Portfolio Selection, Continuous-Time Mean-Variance Portfolio, Constant-Rebalancing Portfolio, Machine Learning, Constrained ℓ1 Minimization, Sparse Portfolio
JEL Classification: C44, C61, G11