Autocalibration and Tweedie-Dominance for Insurance Pricing with Machine Learning
Boosting techniques and neural networks are particularly effective machine learning methods for insurance pricing.
In practice, however, there are endless debates about the right loss function for training the machine learning model, as well as about the appropriate metric for assessing the performance of competing models. Moreover, the sum of fitted values can depart substantially from the observed totals, which often confuses actuarial analysts.
The lack of balance inherent in training models by deviance minimization outside the familiar GLM-with-canonical-link setting has been documented empirically by Wüthrich (2019, 2020), who attributes it to the early stopping rule used in gradient descent methods for model fitting. This presentation studies this phenomenon further when learning proceeds by minimizing Tweedie deviance. It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale. Autocalibration is then proposed as a remedy: this new bias-correction method adds an extra local GLM step to the analysis.
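The balance problem and its correction can be illustrated numerically. The sketch below is a minimal illustration, not the method of the presentation: it simulates Poisson claim counts, fits a misspecified score by a few early-stopped gradient-descent steps on the Poisson deviance (so the fitted total drifts from the observed total), and then restores balance with a crude binwise estimate of E[Y | score], standing in for the local GLM autocalibration step. All variable names and the binning scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Poisson claim counts with a nonlinear true frequency
n = 10_000
x = rng.uniform(0.0, 1.0, n)
true_mu = np.exp(-1.0 + 1.5 * x**2)
y = rng.poisson(true_mu)

# A deliberately misspecified score, linear in x on the log scale,
# fitted by a few gradient steps on the Poisson deviance and stopped
# early, mimicking the lack of balance discussed in the abstract.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(5):                      # early stopping: very few steps
    mu = np.exp(X @ beta)
    grad = X.T @ (mu - y) / n           # gradient of the Poisson deviance
    beta -= 0.2 * grad
score = np.exp(X @ beta)

print("observed total:", y.sum())
print("fitted total (before calibration):", round(score.sum(), 1))

# Stand-in for the local GLM autocalibration step: estimate E[Y | score]
# by averaging the observed responses within quantile bins of the score.
edges = np.quantile(score, np.linspace(0.0, 1.0, 21))
idx = np.clip(np.digitize(score, edges[1:-1]), 0, 19)
calibrated = np.array([y[idx == k].mean() for k in range(20)])[idx]

print("fitted total (after calibration):", round(calibrated.sum(), 1))
```

Because each calibrated prediction is a within-bin average of the observed responses, the calibrated totals match the observed totals by construction, which is the balance property that autocalibration is designed to restore.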
Date and Time
-
Additional Authors and Speakers (not including you)
Michel Denuit
Université de Louvain
Julien Trufin
Université Libre de Bruxelles
Language of Oral Presentation
Bilingual
Language of Visual Aids
Bilingual

Speaker

Name Primary Affiliation
Arthur Charpentier Université du Québec à Montréal