Deloitte's Andrew Smith and Seema Thaper take part in an exclusive Actuarial Post Q&A about stochastic modelling and, in particular, stochastic claims reserving.
How does stochastic claims reserving compare to more traditional methods?
All reserving involves a combination of statistical methodology and expert knowledge of the business. Stochastic reserving is no different in this regard; the difference is that the output is a probability distribution rather than a point estimate. This helps to embed the uncertainty around the estimated reserves into the core reserving process.
How prevalent are stochastic claims reserving methods in the industry today?
Whilst stochastic methods have become commonplace in capital modelling, deterministic methods remain the norm in reserving when determining an actuarial best estimate. Reserving actuaries mainly use stochastic methods for reserve uncertainty calculations.
There is a variety of stochastic methods to choose from. Some of the most commonly used are:
Bootstrap method
The Bootstrap method breaks claim development factors down into two components: an underlying pattern and random noise. The random noise is assumed to have the same distribution at all points in the triangle, so that, given a historic triangle, we can produce alternative outcomes by randomly shuffling the noise within the triangle. Sorting the results by size allows the reserving actuary to derive a distribution and so estimate the range of possible ultimate losses at a given probability level.
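As a rough, purely illustrative sketch of that resampling idea (the triangle values, simulation count and helper functions below are our own assumptions, not part of the interview; practical implementations such as the England-Verrall over-dispersed Poisson bootstrap also add process error to each projected future cell):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 4x4 cumulative claims triangle (rows = origin years,
# columns = development periods); NaN marks the unobserved future.
triangle = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2400., np.nan],
    [1200., 2100., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])
n = triangle.shape[0]

def dev_factors(tri):
    """Volume-weighted chain-ladder development factors (the 'pattern')."""
    f = []
    for k in range(n - 1):
        rows = ~np.isnan(tri[:, k + 1])
        f.append(tri[rows, k + 1].sum() / tri[rows, k].sum())
    return np.array(f)

f_hat = dev_factors(triangle)

# Split each observed link ratio into pattern (f_hat) and multiplicative
# "noise" (the link ratio divided by the fitted factor).
noise = []
for k in range(n - 1):
    for i in range(n - 1 - k):
        noise.append(triangle[i, k + 1] / triangle[i, k] / f_hat[k])
noise = np.array(noise)

def project(latest, factors):
    """Project the latest diagonal to ultimate using the given factors."""
    ultimates = []
    for i in range(n):
        k = n - 1 - i                     # last observed development period
        c = latest[i]
        for kk in range(k, n - 1):
            c *= factors[kk]
        ultimates.append(c)
    return np.array(ultimates)

latest = np.array([triangle[i, n - 1 - i] for i in range(n)])
best_estimate = project(latest, f_hat).sum() - latest.sum()

# Bootstrap: shuffle the noise within the triangle, rebuild pseudo link
# ratios, refit the factors and re-project to get a reserve distribution.
reserves = []
for _ in range(5000):
    pseudo = triangle.copy()
    shuffled = rng.permutation(noise)
    idx = 0
    for k in range(n - 1):
        for i in range(n - 1 - k):
            pseudo[i, k + 1] = pseudo[i, k] * f_hat[k] * shuffled[idx]
            idx += 1
    reserves.append(project(latest, dev_factors(pseudo)).sum() - latest.sum())

reserves = np.sort(np.array(reserves))        # sort the outcomes by size
print(f"best estimate reserve: {best_estimate:.0f}")
print(f"75th percentile: {np.percentile(reserves, 75):.0f}")
print(f"95th percentile: {np.percentile(reserves, 95):.0f}")
```

In practice the triangle would be far larger, and the simulated reserves would usually be supplemented with process error for the unpaid cells before percentiles are read off.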
Mack method
In contrast to the Bootstrap method, the Mack method provides a way of assessing the mean and variance of chain ladder reserve estimates without assuming any specific distribution of claim amounts. Mack proposed a method for fitting normal and lognormal distributions to the mean and standard error of reserve estimates for each origin year so that a full distribution of ultimate claim reserves can be derived.
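For reference, a sketch of the underlying estimator in standard chain-ladder notation (our addition for illustration; see Mack's 1993 paper for the derivation): for cumulative claims C(i,k), estimated development factors f_k and variance parameters sigma_k, the mean squared error of prediction of the ultimate for origin year i is

```latex
% Mack (1993) mean squared error of prediction for origin year i,
% in standard chain-ladder notation (illustrative sketch only).
\[
\widehat{\operatorname{mse}}\!\left(\widehat{C}_{i,I}\right)
  = \widehat{C}_{i,I}^{\,2}
    \sum_{k=I+1-i}^{I-1}
      \frac{\widehat{\sigma}_k^{2}}{\widehat{f}_k^{2}}
      \left( \frac{1}{\widehat{C}_{i,k}} + \frac{1}{\sum_{j=1}^{I-k} C_{j,k}} \right),
\qquad
\widehat{\sigma}_k^{2}
  = \frac{1}{I-k-1} \sum_{j=1}^{I-k} C_{j,k}
      \left( \frac{C_{j,k+1}}{C_{j,k}} - \widehat{f}_k \right)^{2}.
\]
```

A normal or lognormal distribution can then be fitted to the estimated mean and standard error, as described above, to obtain a full reserve distribution.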
Merz/Wüthrich method
As is the case with the Mack method, the Merz/Wüthrich method assumes that the claims development process satisfies the assumptions of the distribution-free chain-ladder model. The added benefit of this method is that it provides a way of estimating the uncertainty in the claims development result over a one-year time horizon, which is what is required for Solvency II.
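To make the one-year quantity concrete (standard notation, added here for illustration rather than taken from the answer above), the claims development result for origin year i over the year from t to t+1 is the change in the estimated ultimate between two successive year-end valuations:

```latex
% One-year claims development result for origin year i (illustrative notation):
% the change in the estimated ultimate between successive year-end valuations.
\[
\mathrm{CDR}_i(t+1) \;=\; \widehat{C}_{i,J}^{\,(t)} - \widehat{C}_{i,J}^{\,(t+1)}.
\]
```

Merz/Wüthrich quantify the prediction uncertainty of this quantity, which is the one-year view of reserve risk required under Solvency II, rather than the uncertainty over the full run-off.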
What challenges does stochastic modelling present to actuaries?
Currently, stochastic reserving techniques are out of balance because the statistical methodology has moved ahead of the business's ability to provide input. Underwriters and managers have a legitimate concern that stochastic reserving “black boxes” can produce illogical results out of the blue, with no user-friendly inputs through which business experts can put the model back on track. As a result, one of the main challenges for actuaries is building the trust of stakeholders in the output of the models, and ensuring the modelling process provides sufficient opportunity to capture expert input.
Stochastic modelling techniques require substantial data, and sufficient data is often not available to make stochastic modelling viable. Along with the complexity and inflexibility of inputting expert judgements into models, this is one of the reasons cited by some reserving actuaries for continuing to rely on deterministic methods.
Another key challenge is articulating both the findings and the techniques themselves to management and key stakeholders. With the advent of Solvency II, articulation of these techniques has improved, particularly within the context of capital modelling and the communication of reserve risk.
Critics have complained in the past that actuaries make models too complex – is stochastic modelling evidence of this?
At the root of this complaint is the combination of reserving actuaries being required to model highly complex systems and then struggling to articulate the output in simple terms. This does not necessarily mean that actuaries are introducing complexity for its own sake, and in many cases the explicit distributions used in stochastic reserving make it easier to point to the assumptions underlying reserving calculations when explaining the uncertainty. That said, stochastic models are certainly more complicated to use and explain than the more straightforward deterministic models.
One example of the disconnect between business and statistics comes with the concept of a “range of best estimates”. The inherent uncertainty within reserving means that the reserving actuary does not know which model is right and different experts have different views. So, whilst a reserving actuary has to pick a number, they should not ignore the risk that one of the other experts was right. The same issue applies in stochastic reserving – focus on one stochastic model should not preclude consideration that other models might turn out to describe the claims process better.
What are the weaknesses of stochastic modelling?
Each of the methods outlined above has its own strengths and weaknesses, and it is a matter of judgement as to which is suitable in different circumstances.
However, there are more generic issues with stochastic models. Many stochastic reserving methods fall into the trap of starting with a single “true” best estimate today, with uncertainty expanding thereafter in a funnel of doubt according to a single stochastic model. This approach lacks a mechanism for capturing the expert opinions that were dismissed in deriving the best estimate.
There is a relatively simple test for assessing the suitability of stochastic reserving methods.
1. Take a large collection of historic triangles (ideally hundreds) and cut them off at a point 5 years in the past.
2. Fit stochastic models to what remains of each triangle and construct 95% intervals for future claims.
3. Examine the last 5 years’ experience and, across all the triangles, count how many times the claims fell within the 95% confidence interval.
By the very definition of a 95% interval, the answer should be 19/20, but in practice that is not what happens. In fact, using a bootstrap or Mack method, you are doing well if half the observations fall within the 95% confidence interval.
In addition, and possibly even worse, the models fail even on their own terms. If you generate random data from the underlying stochastic model, then claims fall inside the estimated 95% confidence interval only about 3/4 of the time. The reason is that the models ignore parameter error; the typical “surprise” arises when the history of a triangle is unusually smooth, so that the volatility parameters are underestimated.
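A minimal simulation sketch of this effect (entirely our own illustration; the triangle size, parameter values and fitting shortcuts are assumptions chosen for brevity): simulate triangles from a known lognormal chain-ladder model, re-estimate the development factors and volatilities from each simulated triangle, build a 95% predictive interval using only the estimated parameters (process error simulated, parameter error ignored), and count how often the interval actually contains the simulated outcome. The achieved coverage typically falls short of the nominal 95%.

```python
import numpy as np

rng = np.random.default_rng(1)

N_DEV = 8                                   # development periods
N_ORIGIN = 8                                # origin years (square triangle)
TRUE_F = np.array([1.8, 1.3, 1.15, 1.08, 1.05, 1.03, 1.01])   # true link ratios
TRUE_SIGMA = 0.08                           # true log-volatility of each link ratio
N_TRIALS = 1000                             # number of simulated "worlds"
N_SIM = 2000                                # paths per predictive interval

def simulate_square():
    """Simulate a complete square of cumulative claims from the true model."""
    c = np.empty((N_ORIGIN, N_DEV))
    c[:, 0] = 1000.0
    for k in range(N_DEV - 1):
        shock = rng.normal(-0.5 * TRUE_SIGMA**2, TRUE_SIGMA, N_ORIGIN)
        c[:, k + 1] = c[:, k] * TRUE_F[k] * np.exp(shock)
    return c

def fit_chain_ladder(tri):
    """Volume-weighted factors and log-volatility estimates from a triangle."""
    f_hat, s_hat = [], []
    for k in range(N_DEV - 1):
        rows = np.arange(N_ORIGIN - 1 - k)          # rows observed in column k+1
        ratios = tri[rows, k + 1] / tri[rows, k]
        f_hat.append(np.average(ratios, weights=tri[rows, k]))
        # Carry the previous estimate forward when only one ratio is available.
        s_hat.append(np.std(np.log(ratios), ddof=1) if len(rows) > 1 else s_hat[-1])
    return np.array(f_hat), np.array(s_hat)

hits = 0
for _ in range(N_TRIALS):
    square = simulate_square()

    # Observed triangle: mask the unobserved (future) lower-right cells.
    tri = square.copy()
    for i in range(N_ORIGIN):
        tri[i, N_DEV - i:] = np.nan

    f_hat, s_hat = fit_chain_ladder(tri)

    # Predictive distribution of total ultimate claims using the *estimated*
    # parameters only: process error is simulated, parameter error is not.
    totals = np.zeros(N_SIM)
    for i in range(N_ORIGIN):
        k = N_DEV - 1 - i                           # last observed development period
        c = np.full(N_SIM, tri[i, k])
        for kk in range(k, N_DEV - 1):
            shocks = rng.normal(-0.5 * s_hat[kk]**2, s_hat[kk], N_SIM)
            c = c * f_hat[kk] * np.exp(shocks)
        totals += c

    lo, hi = np.percentile(totals, [2.5, 97.5])
    actual = square[:, -1].sum()                    # the "true" simulated outcome
    hits += (lo <= actual <= hi)

print(f"nominal coverage: 95%, achieved coverage: {hits / N_TRIALS:.1%}")
```

The same design, applied instead to real historic triangles cut off several years in the past, gives the backtest described earlier.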
Similar model risk arises in the estimation of diversification credit. Measures of historic correlation between different lines of business often return correlations close to zero, which would imply that diversification is very effective in reducing risk. Such measures can overlook common causes, such as a methodological flaw or an unduly favourable assumption that has been applied to many classes of business. Model reviewers are generally aware of this problem, and usually challenge claims for diversification credit that seem particularly ambitious. Modellers may anticipate this by using higher correlation assumptions, a ploy which deflects the challenge but fails to address the underlying methodological weakness.
Is there any way to get around these weaknesses?
There are some answers in the field of robust statistics. When, as in most cases, the actuary does not know what the “right” model is, they need to set a range wide enough to have at least the advertised probability of containing the claims outcome across a set of possible models and parameters. Together with each outcome, the model needs to produce a trajectory which could get to that point. Expert judgement can be applied at several points, including the selection of data, the initial range of models to be considered, and any models excluded from that set.
Is there anywhere that stochastic models succeed where deterministic models otherwise fail?
The advantage of stochastic models is that they make uncertainty explicit, and make it easy to talk quantitatively about ranges and likely outcomes. However, like any model, the output is a consequence of the assumptions you put into it. Reserving practitioners have always sought information outside the basic triangles, such as changes in premium rating, the strength of case reserves or applicable inflation, in order to improve forecasting accuracy. In contrast, some of the stochastic models in current use are inflexible in the input data they accept, and information in non-standard formats may be difficult to incorporate. The quality of inputs is critical to any model; the outputs are not inherently more reliable just because they are called “stochastic” or use a complicated methodology that nobody understands.
Are there circumstances in which stochastic modelling may not be the right tool?
Please see the previous answers for the limitations of, and challenges around, stochastic modelling. As with any model, there are instances, such as where data is insufficient, in which it is not the right tool.
How has stochastic modelling changed the actuarial modelling landscape?
In contrast to the world of capital modelling, where the use of stochastic models has become commonplace, reserving actuaries are still largely dependent on deterministic models such as chain ladder or Bornhuetter-Ferguson. In many insurers, the introduction of ICAS and, more recently, the push towards compliance with Solvency II will likely lead to greater integration of the reserving and capital modelling functions, as reserve risk is a key component of the capital model. Stochastic modelling and the articulation of reserve uncertainty are certainly among the main areas of research and activity for general insurance actuaries, with a number of working parties currently focussing on this area.
By Andrew Smith, partner in Deloitte’s actuarial practice & Seema Thaper, senior manager in Deloitte’s actuarial practice