39 Pages Posted: 24 Mar 2014
Date Written: March 22, 2014
We consider estimating volatility risk factors using large panels of filtered or realized volatilities. The data structure involves three types of asymptotic expansions: the cross-section of volatility estimates at each point in time, namely i = 1, ..., N, observed at dates t = 1, ..., T; and, beyond expanding N and T, the sampling frequency h of the high-frequency data used to compute the volatility estimates, with h going to 0. The continuous-record, or in-fill, asymptotics allows us to control the cross-sectional and serial correlation among the idiosyncratic errors of the panel. A remarkable result emerges: under suitable regularity conditions, traditional principal component analysis yields super-consistent estimates of the factors at each point in time. Namely, contrary to the usual root-N consistency with a standard normal limit, we find N-consistency, also with a standard normal limit, because the high-frequency sampling scheme is tied to the size of the cross-section, boosting the rate of convergence. We also show that standard cross-section-driven criteria suffice for consistent estimation of the number of factors, in contrast with traditional panel data results. Finally, we show that the panel data estimates improve upon the individual volatility estimates.
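The setting described in the abstract can be illustrated with a minimal simulated sketch: build a T x N panel of realized variances from simulated high-frequency returns, run principal component analysis on the panel, and pick the number of factors with a cross-sectional eigenvalue-ratio rule. All specifics below (the dimensions N, T, M, the log-linear factor structure for spot variance, and the eigenvalue-ratio criterion) are illustrative assumptions, not the paper's exact design or estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's):
N, T, M = 50, 200, 390   # N assets, T days, M intraday returns per day (h = 1/M)
K = 2                    # number of latent volatility factors in the simulation

# Latent factors F (T x K) and loadings Lam (N x K) drive log spot variance
F = rng.standard_normal((T, K))
Lam = rng.standard_normal((N, K))
spot_var = np.exp(0.1 * (F @ Lam.T))   # T x N, positive by construction

# High-frequency returns r_{i,t,m} ~ N(0, spot_var * h); realized variance
# for day t and asset i is the sum of squared intraday returns
h = 1.0 / M
r = rng.standard_normal((T, N, M)) * np.sqrt(spot_var[:, :, None] * h)
RV = (r ** 2).sum(axis=2)              # T x N panel of realized variances

# Principal component analysis of the demeaned volatility panel
X = RV - RV.mean(axis=0)
eigval, eigvec = np.linalg.eigh(X.T @ X / (N * T))
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # sort descending

# Cross-sectional eigenvalue-ratio criterion for the number of factors
ratios = eigval[:-1] / eigval[1:]
k_hat = int(np.argmax(ratios[:10]) + 1)

# Estimated factors: project the panel on the leading eigenvectors
F_hat = X @ eigvec[:, :k_hat]
print(k_hat, F_hat.shape)
```

The in-fill (h → 0) feature of the theory enters through M: larger M makes each realized-variance entry a more precise estimate of the spot variance, which is what ties the high-frequency sampling scheme to the cross-sectional rate in the paper's result.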
Keywords: Principal Component Analysis, ARCH-type filters, realized volatility
JEL Classification: C13, C33