Methods for Computing Numerical Standard Errors: Review and Application to Value-at-Risk Estimation
Journal of Time Series Econometrics, Vol. 10, No. 2, pp. 1-9, 2018
14 pages. Posted: 7 Mar 2016; last revised: 3 Aug 2018
Date Written: August 22, 2017
Abstract
The numerical standard error (NSE) is an estimate of the standard deviation that a simulation result would exhibit if the simulation experiment were repeated many times. We review standard methods for computing the NSE and perform Monte Carlo experiments to compare their performance under high or extreme autocorrelation. In particular, we propose an application to risk management in which we assess the precision of the Value-at-Risk measure when the underlying risk model is estimated by simulation-based methods. Overall, HAC estimators with prewhitening perform best in the presence of large or extreme autocorrelation.
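To fix ideas, the following is a minimal Python sketch (not the authors' code) of one standard NSE method the abstract alludes to: a Bartlett-kernel (Newey-West) HAC estimate of the long-run variance of the simulation output, scaled by the sample size. The function name `nse_newey_west` and the default lag-truncation rule floor(4*(n/100)^(2/9)) are illustrative assumptions.

```python
import numpy as np

def nse_newey_west(draws, lags=None):
    """NSE of the sample mean of an autocorrelated simulation output,
    via a Bartlett-kernel (Newey-West) HAC long-run variance estimate.
    """
    x = np.asarray(draws, dtype=float)
    n = x.size
    if lags is None:
        # Common rule-of-thumb bandwidth (an assumption, not the paper's choice)
        lags = int(np.floor(4.0 * (n / 100.0) ** (2.0 / 9.0)))
    xc = x - x.mean()
    # Autocovariance at lag k
    gamma = lambda k: np.dot(xc[: n - k], xc[k:]) / n
    lrv = gamma(0)
    for k in range(1, lags + 1):
        w = 1.0 - k / (lags + 1.0)  # Bartlett weights
        lrv += 2.0 * w * gamma(k)
    return np.sqrt(lrv / n)

# Example: NSE of the mean of a highly autocorrelated AR(1) chain,
# mimicking MCMC output
rng = np.random.default_rng(0)
e = rng.standard_normal(10_000)
chain = np.empty_like(e)
chain[0] = e[0]
for t in range(1, e.size):
    chain[t] = 0.9 * chain[t - 1] + e[t]
print(nse_newey_west(chain))
```

With strong autocorrelation, as here, this HAC-based NSE is considerably larger than the naive standard error `x.std() / sqrt(n)`, which is precisely the regime the paper's comparison targets; prewhitening variants fit an AR filter first and apply the kernel estimator to the residuals.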
Keywords: Bootstrap, GARCH, HAC kernel, numerical standard error (NSE), Monte Carlo, Markov chain Monte Carlo (MCMC), spectral density, Value-at-Risk precision
JEL Classification: C12, C15, C22