Developing Stochastic Loss Reserving Models

Mark Shoun
Ledger Investing
Jan 24, 2019

In order to price insurance-linked securities accurately, we need more than simple point estimates of prospective loss ratios. The long-term value of any financial asset is a function of both its expected future value and the uncertainty around that expectation. Investors expect to be compensated for taking on risk, so as the uncertainty around an asset’s future value increases, its present value decreases, even if the expected future value remains constant.

We at Ledger are deeply invested in the problem of forecasting insurance underwriting loss ratio distributions. Today we’re excited to announce the release of a new, internally-developed model that addresses a critical problem in loss ratio forecasting — loss development. Our new model offers several advantages over other such models in the actuarial literature.

Background on Loss Development

Insurance underwriting results are challenging in that they retain a high degree of uncertainty for a relatively long period of time. In most forms of property and casualty (P&C) insurance, policies cover claims on the basis of when the loss occurred, not when it was reported. For example, an auto insurance policy that covers a vehicle from January 1, 2019 through June 30, 2019 would pay a bodily injury claim arising from an accident on May 1, 2019, even if the claimant did not contact the insurance company until February 1, 2020. Even once a claim has been reported, it may be months or years until the true cost of the claim is known with certainty, especially if the claim enters litigation or involves long-term medical expenses.

This uncertainty makes determining the underwriting performance of an insurance company very difficult. When all claims associated with policies in effect in a given year are completely paid and settled, the loss ratio is known with certainty, and is called the “ultimate” loss ratio. However, it may take ten years or more for the ultimate loss ratio to emerge — an unacceptably long delay for stakeholders.

Thus, the actuarial community has developed a set of techniques for estimating ultimate loss ratios given present information. For example, on January 1st, 2019, we may wish to estimate ultimate loss ratios for 2018, 2017, 2016, and so forth. Obviously, the uncertainty in ultimate loss ratios is greater for more recent years.

This general problem is referred to in the industry as “loss development”. Many loss development models have been proposed in the actuarial literature over the past several decades. When viewed as black boxes, these models are very similar. The input to each model is historical data on loss payment patterns for a given company (commonly referred to as a “loss triangle”), and the output of the model is a point estimate or predictive distribution of that company’s ultimate paid losses for each accident year. Of course, this broad structural similarity masks a wide degree of variety in terms of how each model constructs the ultimate loss estimates.
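
For readers unfamiliar with the format, here is a minimal sketch of a loss triangle as a data structure. The numbers are synthetic, and the layout (accident years as rows, development lags as columns) is the conventional one.

```python
import numpy as np

# An illustrative (synthetic) cumulative paid-loss triangle, in $M.
# Rows are accident years; columns are development lags. np.nan marks
# cells that have not yet been observed as of the evaluation date.
triangle = np.array([
    [10.2, 17.8, 21.5, 23.1],        # oldest accident year: observed at all lags
    [11.0, 19.4, 23.0, np.nan],
    [ 9.7, 16.9, np.nan, np.nan],
    [12.3, np.nan, np.nan, np.nan],  # most recent accident year: one lag observed
])

# A loss development model takes the observed cells as input and produces a
# point estimate or predictive distribution of each row's ultimate losses.
print(triangle)
```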

Loss Development Models at Ledger Investing

We at Ledger Investing are keenly interested in the problem of loss development. Given the context in which we apply loss development models, we want models that are both accurate and unbiased: we care as much about the fidelity of the shape of a model’s predicted loss distribution as we do about the accuracy of its point estimates. We study and evaluate loss development models from the actuarial literature, and we devise models of our own.

We recognize that every model has its unique strengths and weaknesses, and that no modeling approach is perfect. In the interest of transparency, our platform provides a variety of paid loss development models — some old and some new, some classical and some Bayesian, some from the actuarial literature and some developed by us internally. We allow users to easily compare predictions from several different models when forecasting the performance of an insurance portfolio, and for each model we provide a variety of performance metrics from a rigorous and extensive backtesting protocol.

Our New Loss Development Model: HLOS-v1

The loss development model we’re releasing today is called HLOS-v1 (pronounced H-loss), which stands for Hierarchical Linear Observed State (version 1). The full model has a lot of interesting features, but for now I’ll just highlight the few that are important enough to make it into the model name.

Hierarchical

In a Bayesian context, hierarchical modeling means that we treat some model parameters as coming from a distribution. This parameter distribution, in turn, has hyperparameters which are estimated from the data.

A quick example: let’s say we’re trying to estimate mean household income in each borough of New York City by surveying 100 residents of each borough. Our sample means by borough are $55K in the Bronx, $80K in Manhattan, $50K in Queens, $60K in Brooklyn, and $190K in Staten Island. For anyone familiar with New York City, these results seem suspicious. The Staten Island estimate is an obvious outlier, likely driven by one or a few very high incomes in the sample. A hierarchical model fit to this data would pull the Staten Island estimate down and pull the estimates of the other boroughs up.
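
As a rough illustration of how pooling works, here is a minimal empirical-Bayes sketch of the borough example. The sample means come from the example above; the standard errors are assumed purely for illustration, and a full Bayesian hierarchical model would estimate all of these quantities jointly.

```python
import numpy as np

# Borough sample means from the example above ($K). The standard errors are
# assumed for illustration only; note the noisy Staten Island estimate.
boroughs = ["Bronx", "Manhattan", "Queens", "Brooklyn", "Staten Island"]
means = np.array([55.0, 80.0, 50.0, 60.0, 190.0])
se = np.array([5.0, 8.0, 5.0, 5.0, 40.0])

# Empirical-Bayes partial pooling: estimate the between-borough variance, then
# shrink each borough's mean toward the grand mean in proportion to how noisy
# its own estimate is. (A full Bayesian model would infer all of this jointly.)
grand_mean = np.mean(means)
between_var = max(np.var(means, ddof=1) - np.mean(se**2), 0.0)

weight = between_var / (between_var + se**2)  # 1 = trust the data, 0 = pool fully
pooled = grand_mean + weight * (means - grand_mean)

for name, raw, post in zip(boroughs, means, pooled):
    print(f"{name:14s} raw ${raw:6.1f}K -> pooled ${post:6.1f}K")
```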

This idea is not unique to Bayesian modeling. Hierarchical modeling (or a functionally similar equivalent) is known as “credibility weighting” in actuarial science, as “random effects” in statistics and econometrics, and as “regularization” in machine learning and deep learning.

Hierarchical modeling is gaining currency in the actuarial literature, and was applied to loss reserve development as early as 2008. However, we distinguish ourselves here by applying hierarchical modeling at scale. Rather than fitting a hierarchical model to five or eight companies, we fit a separate model for each line of business and include most U.S. companies active in that line simultaneously. For example, our Workers’ Compensation model uses 761 companies for calendar year (CY) 2016, and our Commercial Auto Liability model uses 941 companies for CY 2017.

The key advantage of hierarchical modeling is that it allows us to reduce the amount of noise in our model estimates, even when we fit models with a very large number of degrees of freedom per company.

Linear

Linearity normally isn’t something to brag about in the data science community — linear models are viewed as simple, obvious defaults. However, in a loss development context, linearity represents a significant improvement over many models currently in use.

Classic loss reserving models (such as chain-ladder and its derivatives) assume that there is a fixed average ratio between loss ratios at time t and time t+1; in other words, that L(t+1) = b*L(t) for some value of b. Taken at face value, this may seem to be a reasonable assumption, but it is overly restrictive. If we think of L(t) as a predictor and L(t+1) as a response in a generic regression context, then we get the model L(t+1) = a + b*L(t), where the added variable a acts as an intercept. The chain-ladder model is simply this linear model with a forced to zero, which is to say it assumes a purely proportional relationship between L(t+1) and L(t). In general, linear relationships are more flexible and expressive than proportional ones.

In general, when we fit linear models, we see that a > 0 and 0 < b < 1. Translated back into the insurance domain, this means that if initial paid losses for a given year are, say, 50% lower than another year’s initial paid losses, we expect that year’s ultimate paid losses to be lower, but not a full 50% lower.
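
A minimal sketch of the two fits on synthetic data (the true coefficients and noise level below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic loss ratios at lag 0 and lag 1 for 200 hypothetical companies,
# generated from a linear relationship with a > 0 and 0 < b < 1.
L0 = rng.uniform(0.2, 0.6, size=200)
L1 = 0.25 + 0.7 * L0 + rng.normal(0.0, 0.02, size=200)

# Chain-ladder-style proportional fit: L(t+1) = b * L(t), no intercept.
b_prop = np.sum(L0 * L1) / np.sum(L0 ** 2)

# Linear fit: L(t+1) = a + b * L(t).
X = np.column_stack([np.ones_like(L0), L0])
a_lin, b_lin = np.linalg.lstsq(X, L1, rcond=None)[0]

print(f"proportional: L(t+1) = {b_prop:.3f} * L(t)")
print(f"linear:       L(t+1) = {a_lin:.3f} + {b_lin:.3f} * L(t)")
# The proportional model is the linear model with a forced to zero; when the
# true intercept is positive, it compensates by overstating b.
```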

The plot below shows a scatterplot of paid losses at development lag 0 against cumulative paid losses at lag 1 for a large sample of commercial auto insurers. The red line shows the best-fitting proportional (no-intercept) model, and the blue line shows the best-fitting linear model. Clearly, the blue line describes the data better.

The actuarial community recognized this shortcoming of classic models as long ago as 1994. Some newer models, such as GMCL, do include intercept terms, but their adoption has been limited by the fact that intercepts require estimating many more parameters, which is difficult to do with the small amount of data contained in a single company’s loss triangle. Our large-scale hierarchical estimation allows us to neatly avoid this issue.

Observed State (version 1)

The last component of the model’s name corresponds to a conceptual shift on our part, rather than any direct change to the model’s structure. Many loss reserving models can be effectively framed as follows: given some model parameters estimated from a loss triangle and the most recent loss information for an accident year, project that accident year’s ultimate loss ratio.

We think that this conceptual framework, where the only relevant input is the most recent losses, is a crude and simplistic view of reality. It’s easy to think of other potential inputs that could impact loss development. A couple of examples:

  • For incurred losses, did the most recent development update bring the accident year losses higher or lower? We’ve seen empirical evidence that companies are conservative when adjusting loss reserves. If incurred losses are adjusted up or down one year, it’s likely they’ll be adjusted in the same direction again the next year.
  • What is the most recent loss information for the previous accident year? There is a strong degree of correlation between year-to-year loss ratios, and there is empirical evidence that including the previous accident year’s losses is a useful predictor.

We call our model State (version 1) because we think of the current status of an accident year’s loss payments as a state vector. In version 1, the state vector only has one element, but we’re already working on models with multiple elements in the state vector.

We call our model Observed State because every element in our state vectors can be directly computed from the data. In general, this need not be true — in principle, we could incorporate additional latent variables into the state vector. However, efficiently estimating these latent variables can be very challenging, and we are not actively investigating latent state models at present.
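
To make the framing concrete, here is a hypothetical sketch of the state-vector idea. All names and coefficients are ours, chosen for illustration; this is not the HLOS-v1 codebase.

```python
import numpy as np

# In version 1, the state vector has a single, directly observable element:
# the accident year's current cumulative loss ratio.
def observed_state_v1(current_loss_ratio: float) -> np.ndarray:
    return np.array([current_loss_ratio])

# A richer state vector could add other directly computable features, such as
# those described above: the direction of the latest incurred adjustment, or
# the previous accident year's most recent loss ratio.
def observed_state_v2(current_loss_ratio: float,
                      last_adjustment: float,
                      prior_year_loss_ratio: float) -> np.ndarray:
    return np.array([current_loss_ratio, last_adjustment, prior_year_loss_ratio])

# One development step is then a linear map of the state, with the intercept a
# and weights w estimated hierarchically across companies.
def develop_one_step(state: np.ndarray, a: float, w: np.ndarray) -> float:
    return a + float(w @ state)

print(develop_one_step(observed_state_v1(0.45), a=0.25, w=np.array([0.7])))
```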

Model Performance

Discussions of the theoretical advantages of our model’s structure are all well and good, but the ultimate measure of a model is its value as a tool for prediction. Here, we provide some preliminary results on model performance. The plot below shows out-of-sample root mean squared error (RMSE) between mean predicted ultimate loss ratios and true ultimate loss ratios. (For those readers who are not statistically inclined, this is a standard measure of typical prediction error.) Each color represents a different model, and each panel summarizes a different statutory line of business. The x-axis is the number of observed years of development, and the y-axis is the RMSE.

As expected, average errors are higher at lower development lags: it is hard to predict ultimate loss ratios given only one year’s development, but much easier given eight years’ development. We see that our model (the red line) is generally at least competitive with the other models, and offers more of an advantage for longer-tailed lines of business.
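
For concreteness, the metric itself is straightforward to compute; the values below are placeholders, not results from our backtests.

```python
import numpy as np

# Out-of-sample RMSE between mean predicted and true ultimate loss ratios.
predicted = np.array([0.62, 0.71, 0.55, 0.68])
actual = np.array([0.65, 0.69, 0.60, 0.64])

rmse = np.sqrt(np.mean((predicted - actual) ** 2))
print(f"RMSE: {rmse:.4f}")
```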

The plot below has a similar structure, except instead of model RMSE, it compares the Anderson-Darling test statistic, which is a measure of how well a statistical distribution matches reality. In general, lower values of the test statistic are better.

Here, we see that our model is usually the best on this metric, with the exceptions being earlier development lags for shorter-tailed lines of business.
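
One common way to apply the Anderson-Darling statistic to predictive distributions (our assumption about the general construction, not necessarily the exact protocol behind these backtests) is to compute the probability integral transform (PIT) value of each realized outcome under its predicted CDF and test the PIT values for uniformity. A minimal sketch:

```python
import numpy as np

# Probability integral transform (PIT) check: for each realized outcome y_i,
# compute u_i = F_i(y_i) under its predicted CDF. If the predictive
# distributions are well calibrated, the u_i are uniform on [0, 1].
# The Anderson-Darling statistic against the uniform distribution is
#   A^2 = -n - (1/n) * sum_i (2i - 1) * [ln u_(i) + ln(1 - u_(n+1-i))],
# where u_(1) <= ... <= u_(n) are the sorted PIT values.
def anderson_darling_uniform(u: np.ndarray) -> float:
    u = np.sort(np.clip(u, 1e-12, 1.0 - 1e-12))  # guard the logarithms
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1.0 - u[::-1])))

rng = np.random.default_rng(0)
print(anderson_darling_uniform(rng.uniform(size=500)))     # calibrated: small
print(anderson_darling_uniform(rng.beta(4, 4, size=500)))  # miscalibrated: larger
```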

But Wait, There’s More!

Of course, the description of the model I just provided is cursory and high-level. We’ll be posting a much more in-depth paper detailing our methodology to the Ledger Capital Markets platform. Most importantly, we’re rolling out predictions from the HLOS-v1 model on the same platform over the next few days, so you can compare its outputs directly against those of other models.

If you’d like to check out this model for yourself, Ledger Capital Markets is now accepting registration requests from insurers and investors for its investment platform. You can contact us directly here to receive your invitation. For more about Ledger Investing, please visit our website at ledgerinvesting.com.
