Validity of Climate Change Forecasting for Public Policy Decision Making
9 Pages · Posted: 9 Aug 2010
Date Written: February 24, 2009
Abstract
Policymakers need to know whether prediction is possible and, if so, whether any proposed forecasting method will provide forecasts that are substantively more accurate than those from the relevant benchmark method. Inspection of global temperature data suggests that it is subject to irregular variations on all relevant time scales and that variations during the late 1900s were not unusual. In such a situation, a “no change” extrapolation is an appropriate benchmark forecasting method. We used the U.K. Met Office Hadley Centre’s annual average thermometer data from 1850 through 2007 to examine the performance of the benchmark method. The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers: for example, mean absolute errors for 20- and 50-year horizons were 0.18°C and 0.24°C. We nevertheless demonstrate the use of benchmarking with the example of the Intergovernmental Panel on Climate Change’s 1992 linear projection of long-term warming at a rate of 0.03°C per year. The small sample of errors from ex ante projections at 0.03°C per year for 1992 through 2008 was practically indistinguishable from the benchmark errors. Validation for long-term forecasting, however, requires a much longer horizon. Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in the IPCC’s scenario of exponential CO2 growth – the years 1851 to 1975. The errors from the projections were more than seven times greater than the errors from the benchmark method, and relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions.
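The benchmarking procedure the abstract describes can be sketched in a few lines: from each origin year, issue both a “no change” forecast and a linear 0.03°C-per-year projection for a given horizon, then compare mean absolute errors. The sketch below uses synthetic random-walk anomalies as a stand-in for the HadCRUT series the authors used; all function and variable names are illustrative, not the authors’ code.

```python
# Sketch of the no-change benchmark vs. a 0.03 C/yr linear projection.
# Uses synthetic annual anomalies (NOT the Hadley Centre data); names
# and the random-walk generator are assumptions for illustration only.
import random

random.seed(0)
# Hypothetical 158-year series of annual temperature anomalies.
temps = [0.0]
for _ in range(157):
    temps.append(temps[-1] + random.gauss(0, 0.1))

def benchmark_errors(series, horizon):
    """No-change benchmark: the forecast for t+h is the value at t."""
    return [abs(series[t + horizon] - series[t])
            for t in range(len(series) - horizon)]

def trend_errors(series, horizon, rate=0.03):
    """Linear projection at `rate` degrees C per year from each origin t."""
    return [abs(series[t + horizon] - (series[t] + rate * horizon))
            for t in range(len(series) - horizon)]

def mae(errors):
    """Mean absolute error over all forecast origins."""
    return sum(errors) / len(errors)

h = 20  # forecast horizon in years
bench = mae(benchmark_errors(temps, h))
trend = mae(trend_errors(temps, h))
print(f"MAE, no-change benchmark (h={h}): {bench:.3f} C")
print(f"MAE, 0.03 C/yr projection (h={h}): {trend:.3f} C")
print(f"Relative absolute error (trend / benchmark): {trend / bench:.2f}")
```

A relative absolute error above 1 means the trend projection is less accurate than the naive benchmark at that horizon, which is the comparison the paper’s validation exercise turns on.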
Keywords: climate model, ex ante forecasts, out-of-sample errors, predictability, public policy, relative absolute errors, unconditional forecasts