Measuring and Adjusting for Overconfidence

Posted: 28 Jun 2012

Peter Schanbacher

University of Konstanz - Faculty of Economics and Statistics

Date Written: June 27, 2012

Abstract

When density forecasts are evaluated, the scoring rule is often chosen arbitrarily, and this choice strongly influences the ranking of forecasts. This paper identifies overconfidence as the main driver of scoring differences. A novel approach to measuring overconfidence is proposed. Based on a non-proper scoring rule, forecasts can be individually adjusted towards a calibrated forecast. Applying the adjustment procedure to the Survey of Professional Forecasters shows that out-of-sample forecasts can be significantly improved. The ranking of the adjusted forecasts also becomes less sensitive to the choice of scoring rule.
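The abstract does not spell out the scoring rules or the adjustment step, so the following minimal Python sketch is purely illustrative and is not the paper's procedure: it assumes Gaussian density forecasts, scores them with the logarithmic and quadratic (both proper) scoring rules, and widens an overconfident (too narrow) forecast towards a calibrated one using a hypothetical mixing weight of 0.5.

# Illustrative sketch only, not the procedure of the paper: Gaussian density
# forecasts are scored with the logarithmic and quadratic (both proper)
# scoring rules, and an overconfident forecast is widened towards a
# calibrated one with a hypothetical mixing weight of 0.5.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=5000)          # realized outcomes, true sd = 1

def log_score(y, mu, sigma):
    # Logarithmic score (higher is better): log predictive density at the outcome.
    return norm.logpdf(y, loc=mu, scale=sigma).mean()

def quadratic_score(y, mu, sigma):
    # Quadratic score for a Gaussian forecast: 2*f(y) - integral of f^2.
    return (2.0 * norm.pdf(y, loc=mu, scale=sigma)
            - 1.0 / (2.0 * np.sqrt(np.pi) * sigma)).mean()

sigma_overconfident = 0.5                     # predictive sd reported too small
sigma_adjusted = 0.5 * sigma_overconfident + 0.5 * y.std()  # widen towards the data sd

for name, sigma in [("overconfident", sigma_overconfident),
                    ("adjusted", sigma_adjusted),
                    ("calibrated", 1.0)]:
    print(f"{name:14s}  log: {log_score(y, 0.0, sigma):6.3f}"
          f"  quad: {quadratic_score(y, 0.0, sigma):6.3f}")

In this toy setting, widening the overconfident density improves both scores out of sample. The paper's adjustment is instead based on a non-proper scoring rule and an explicit overconfidence measure, which this sketch does not attempt to reproduce.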

Keywords: belief measurement, proper scoring rules, overconfidence, probability adjustment

JEL Classification: C53, E37, D81

Suggested Citation

Schanbacher, Peter, Measuring and Adjusting for Overconfidence (June 27, 2012). Available at SSRN: https://ssrn.com/abstract=2094384 or http://dx.doi.org/10.2139/ssrn.2094384

Peter Schanbacher (Contact Author)

University of Konstanz - Faculty of Economics and Statistics ( email )

Universitaetsstr. 10
78457 Konstanz
Germany
