Nagging Predictors

Risks 2020, 8(3), 83

Posted: 16 Jul 2020 Last revised: 22 Mar 2021

Ronald Richman

Old Mutual Insure; University of the Witwatersrand

Mario V. Wuthrich

RiskLab, ETH Zurich

Date Written: June 15, 2020


We define the nagging predictor, which, instead of using bootstrapping to produce a series of i.i.d. predictors, exploits the randomness of neural network calibrations to provide a more stable and accurate predictor than is available from a single neural network run. Convergence results are provided for the family of Tweedie's compound Poisson models, which are commonly used for general insurance pricing. In the context of a French motor third-party liability insurance example, the nagging predictor achieves stability at the portfolio level after about 20 runs. At the policy level, we show that for some policies up to 400 neural network runs are required to achieve stability. Since working with 400 neural networks is impractical, we calibrate two meta models to the nagging predictor, one unweighted, and one using the coefficient of variation of the nagging predictor as a weight, finding that these meta networks can approximate the nagging predictor well, with only a small loss of accuracy.
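The core idea of the abstract can be sketched as follows: each neural network calibration of the same data yields a slightly different prediction because of random initialization, minibatch order, and so on, and the nagging predictor averages the predictions of m independent runs. The toy simulation below does not train real networks; it stands in for run-to-run calibration randomness with Gaussian noise around a hypothetical true claim frequency (both the frequency and the noise level are illustrative assumptions, not values from the paper), and shows how the spread of the averaged predictor shrinks as m grows.

```python
import random
import statistics

random.seed(42)

TRUE_CLAIM_FREQ = 0.08   # hypothetical expected claim frequency for one policy (assumption)
RUN_NOISE_SD = 0.02      # stand-in for run-to-run calibration randomness (assumption)

def single_network_run():
    """Stand-in for one neural network calibration: the prediction for a
    fixed policy varies across runs; here that variability is simulated
    as Gaussian noise around the true mean."""
    return random.gauss(TRUE_CLAIM_FREQ, RUN_NOISE_SD)

def nagging_predictor(m):
    """Average the predictions of m independent runs (the nagging predictor)."""
    return statistics.fmean(single_network_run() for _ in range(m))

# The standard deviation of the averaged predictor shrinks like 1/sqrt(m),
# which is why a single policy may need many runs to stabilize:
for m in (1, 20, 400):
    preds = [nagging_predictor(m) for _ in range(200)]
    print(f"m={m:4d}  mean={statistics.fmean(preds):.4f}  sd={statistics.stdev(preds):.5f}")
```

In this simplified setting the run-level predictions are i.i.d., so the 1/sqrt(m) rate is exact; for actual network calibrations the paper's convergence results for Tweedie's compound Poisson models make the analogous statement precise.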

Keywords: bagging, neural networks, network aggregation, insurance pricing, regression modeling

JEL Classification: G22, C02, C13, C45

Suggested Citation

Richman, Ronald and Wuthrich, Mario V., Nagging Predictors (June 15, 2020). Risks 2020, 8(3), 83. Available at SSRN.

Ronald Richman

Old Mutual Insure ( email )

Wanooka Place
St Andrews Road
Johannesburg, 2192
South Africa

University of the Witwatersrand ( email )

1 Jan Smuts Avenue
Johannesburg, Gauteng 2000
South Africa

Mario V. Wuthrich (Contact Author)

RiskLab, ETH Zurich ( email )

Department of Mathematics
Rämistrasse 101
Zurich, 8092
Switzerland
