
Large Events & Anomalies

This section focuses on handling infrequent, high-variance events, such as large losses or catastrophes. These events typically require more data to project expected losses accurately.

Balancing Stability with Responsiveness

The core actuarial theme here is balancing stability with responsiveness. Because these events don't occur frequently, looking at the latest 50 years of data might be necessary to provide a stable loss estimate. However, using data from 50 years ago might not be responsive enough, as conditions and relevance can change significantly over such a long period.


The Big Picture: Estimating Future Losses with Anomalies

We aim to estimate future losses using historical data. However, this data can contain anomalies:

  • Large (Shock) Losses:
    • The definition varies by business.
    • These are extremely large losses relative to the size of the business written (e.g., a $2M loss is significant for a $10M book of business, but less so for a $1B book).
  • Catastrophe:
    • According to ISO, a catastrophe involves $25M of industry losses from an event and a large number of claims.
  • Challenge of Large Losses (Example):
    • Consider a true loss process where a year produces $0.5M in losses 90% of the time and $2M the other 10%, so the true expected annual loss is 0.9 × $0.5M + 0.1 × $2M = $0.65M.
    • If all three of our most recent years happened to hit the $2M outcome (or none of them did), the experience gives a poor view of the true expected loss, leading to overestimation or underestimation (see the simulation sketch after this list).
  • What if we don't adjust for them?
    • We'll overestimate during "unlucky" years and underestimate during "lucky" years.
  • What's the Goal?
    • To get the best estimate of the expected value, as we can never predict when an earthquake will strike.
    • Rates should cover these costs over long periods and shouldn't over-react to lucky or unlucky years.
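
To make the over/under-estimation concrete, here is a minimal Python sketch. The 90%/10% split and the three-year experience window are taken from the example above; everything else (seed, number of simulated periods) is an arbitrary illustration, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# True loss process from the example: $0.5M in 90% of years, $2M in 10%.
true_expected = 0.9 * 0.5 + 0.1 * 2.0   # $0.65M per year

# Simulate many 3-year experience periods and estimate losses from each.
n_periods = 100_000
annual_losses = rng.choice([0.5, 2.0], p=[0.9, 0.1], size=(n_periods, 3))
estimates = annual_losses.mean(axis=1)   # naive 3-year average per period

print(f"True expected annual loss:        ${true_expected:.2f}M")
print(f"Share of periods overestimating:  {np.mean(estimates > true_expected):.1%}")
print(f"Share of periods underestimating: {np.mean(estimates < true_expected):.1%}")
```

Roughly a quarter of three-year periods contain at least one shock year and overestimate the true mean, while the rest underestimate it; no short window lands on the true $0.65M.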

How to Adjust for Shock Losses?

When adjusting for shock losses, consider the following:

Common Options:

(Jargon: non-excess losses + excess losses = ground-up losses)

  • Cap Losses at Basic Limit:
    • This approach helps in calculating an expected loss for losses under the basic limit.
    • Rates for losses above the basic limit must be derived separately.
    • Historical premiums also need adjustment to basic limit rates, reflecting what premiums would have been collected if all policies were written with basic limits.
  • Cap Losses and Apply an Excess Loss Loading Factor:
    • This is common for property coverages.
    • Losses are capped at a specific amount.
    • A factor is applied to the non-excess losses to account for the excess losses (the "excess loss loading").
  • Remove Ground-Up Shock Losses and Apply a Shock Loss Loading:
    • This method is less common.
    • It requires defining a "shock loss" and removing it instead of capping it.

Challenge:

If using the last two methods, a shock loss threshold needs to be defined.

Goals:

  • Include as many non-excess/non-shock losses as possible (a lower cap includes fewer non-excess losses).
  • Minimize the volatility of non-excess/non-shock losses (a higher cap increases loss volatility).

Choices for the "Cap":

  • Basic Policy Limits (assuming policies have limits):
    • This doesn't work for Workers' Compensation, where the goal is full recovery and return to work.
  • Actuarial Judgment:
    • Ask: "Above which level do the losses become volatile?"
  • Percentile of the Size-of-Loss Distribution:
    • For example, sort all losses and cap them at the 99th-percentile loss amount (see the sketch after this list).
  • Loss as a Percent of Insured Value:
    • For property, cap each loss at a set percentage of the risk's insured value.
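
A quick sketch of the percentile-based cap choice. The loss amounts are hypothetical, made up purely for illustration:

```python
import numpy as np

# Hypothetical ground-up claim sizes (made up for illustration).
losses = np.array([5_000, 12_000, 30_000, 45_000, 80_000, 150_000,
                   300_000, 650_000, 1_400_000, 4_000_000])

# Choose the cap as the 99th percentile of the size-of-loss distribution.
cap = np.percentile(losses, 99)

capped = np.minimum(losses, cap)          # non-excess (capped) losses
excess = np.maximum(losses - cap, 0.0)    # portion of each loss above the cap
print(f"Cap: ${cap:,.0f}; excess share of total: {excess.sum() / losses.sum():.1%}")
```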

Choices for the "Excess-loss/Shock-loss Loading Factor":

  • Long-Term Average of Excess/Non-Excess:
    • Compute the excess-to-non-excess ratio over a long period (e.g., 20-30 years) and apply that loading factor to the more recent (e.g., 3-year) non-excess losses.
  • How many years of data?
    • Balance the stability of the average ratio with responsiveness to changes.
  • Account for changes in average severity over time?
    • Trended Excess-Losses / Trended Non-Excess Losses: Trend historical losses to future policy cost levels.
    • Vary Cap by Year (indexed to year): Since each year will have a different cost level.
    • See also: Account for inflation or changes in average severity over time (trends in the excess layer are greater than in the non-excess layer).
  • #datascience Fit a statistical distribution to the loss data and run a simulation (see the sketch below).
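
As a rough illustration of the #datascience option, the sketch below fits a lognormal to a handful of hypothetical claim sizes and simulates the implied excess loss loading above the $1.2M threshold used later in this section. The claim values and the fitting method (matching the moments of log losses) are assumptions for illustration, not a prescribed approach.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical historical ground-up claim sizes (made up for illustration).
claims = np.array([12_000, 35_000, 48_000, 90_000, 150_000,
                   220_000, 400_000, 750_000, 1_500_000, 3_200_000])

# Fit a lognormal by matching the mean and standard deviation of log(claim).
mu, sigma = np.log(claims).mean(), np.log(claims).std(ddof=1)

# Simulate many claims from the fitted distribution and split each into its
# non-excess and excess pieces at a $1.2M cap.
cap = 1_200_000
sim = rng.lognormal(mu, sigma, size=1_000_000)
non_excess = np.minimum(sim, cap)
excess = np.maximum(sim - cap, 0.0)

# Excess loss loading factor implied by the fitted distribution.
print(f"Simulated excess loss loading factor: {excess.sum() / non_excess.sum():.3f}")
```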

Calculations for Shock Losses

The intuition here is to find a factor that, when multiplied by non-excess losses, gives the expected excess losses (and hence the correct expected total).

Example: Calculation for Excess Loss Loading

Here, a shock loss is defined as a single loss greater than $1,200,000.

  • Column definitions:
    • # of Excess Claims: Count the claims exceeding $1,200,000.
    • Ground-Up Excess Losses: Sum the full (ground-up) size of those claims for each Accident Year (AY).
    • Losses Excess of $1.2M: The portion of each excess claim above the $1.2M threshold, summed for each AY.
    • Non-Excess Losses: Total reported losses minus excess losses (i.e., all losses capped at $1.2M).
    • Excess Ratio: Excess losses / Non-excess losses for each AY.
  • Sum the total excess and total non-excess losses across years and take the ratio (a weighted ratio), or take a straight average of the "Excess Ratio" column.
  • Multiply non-excess losses by this excess loss loading factor to estimate the expected excess losses (a short sketch follows this list).
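
A minimal sketch of the table logic above. The claims by accident year are hypothetical; only the $1.2M threshold comes from the example:

```python
# Hypothetical reported claims by accident year (AY); the individual claim
# amounts are made up purely for illustration.
claims_by_ay = {
    2021: [300_000, 450_000, 2_000_000],
    2022: [250_000, 600_000, 900_000],
    2023: [400_000, 1_800_000, 3_000_000],
}
cap = 1_200_000  # shock-loss threshold from the example

total_excess = total_non_excess = 0.0
for ay, claims in claims_by_ay.items():
    excess = sum(max(c - cap, 0) for c in claims)      # losses excess of $1.2M
    non_excess = sum(min(c, cap) for c in claims)      # losses capped at $1.2M
    total_excess += excess
    total_non_excess += non_excess
    print(f"AY {ay}: excess ratio = {excess / non_excess:.3f}")

# Weighted excess loss loading factor (sum of excess over sum of non-excess);
# a straight average of the per-year ratios is the alternative.
loading = total_excess / total_non_excess
print(f"Excess loss loading factor: {loading:.3f}")
# Applying this factor to the non-excess losses in the rate indication gives
# the expected excess losses.
```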

Example: Calculation for Removal of Ground-Up Shock Loss

  • Note the difference: this method works directly with "shock losses" removed ground-up, not with "excess losses" above a cap.
  • Column definitions:
    • # of Shock Losses: Count claims exceeding $1,200,000.
    • Ground-Up Shock Losses: Sum the full size of those shock losses for each AY.
    • Non-Shock Losses: Total reported losses minus ground-up shock losses.
    • Shock Ratio: Ground-up shock losses / Non-shock losses.
  • Multiplying non-shock losses by the shock ratio gives estimated shock losses; multiplying by (1 + shock ratio) gives an estimate of total expected losses, including shocks (a short sketch follows this list).
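
The same hypothetical claims as in the previous sketch, pooled across years and reworked under the shock-loss-removal approach; the only change is that shock claims are removed in full rather than capped:

```python
# Hypothetical claims (made up for illustration); shock claims are removed
# ground-up rather than capped at the threshold.
threshold = 1_200_000
claims = [300_000, 450_000, 2_000_000, 250_000, 600_000, 900_000,
          400_000, 1_800_000, 3_000_000]

ground_up_shock = sum(c for c in claims if c > threshold)
non_shock = sum(c for c in claims if c <= threshold)
shock_ratio = ground_up_shock / non_shock

# In practice the long-term shock ratio is applied to the non-shock losses
# of the experience period being priced.
expected_total = non_shock * (1 + shock_ratio)
print(f"Shock ratio: {shock_ratio:.3f}")
print(f"Expected total losses (incl. shock load): ${expected_total:,.0f}")
```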

How to Adjust for Catastrophic Losses?

Catastrophic losses are treated like "shock" losses and adjusted for on a ground-up basis, rather than as "excess" losses.

Approach:

Remove all catastrophic losses from the data and replace them with a Catastrophe Loading Factor (Cat LF).

Two Components:

  • Modelled (infrequent):
    • Computer simulations where the insurer uploads their current book of business.
    • These models generate loss distributions and expected losses for events where historical data is insufficient (e.g., hurricanes). The model might simulate a hurricane's impact on insured properties based on geographical outcomes to inform the dataset.
  • Non-modelled #tip (this is the only one tested):
    • This approach doesn't use a catastrophe model and covers events that occur relatively frequently (e.g., hailstorms).
    • Long-term 20-30 year averages are often sufficient.
    • Consider exposure growth in catastrophe-prone areas: If a previously less populated area prone to hailstorms has seen increased population, the same magnitude hailstorm could cause significantly more damage now.
    • Relevance of old data: If building codes have changed, older data is less relevant. Always balance stability with responsiveness (relevance).

Goal:

To establish a rate that, over 50 years, will cover losses from a 1-in-50-year earthquake, rather than attempting to predict the earthquake and increasing rates only for affected insureds in that specific year (which would be impractical).

Non-Pricing Measures:

  • Limit catastrophe risks:
    • Restrict writings in high-risk areas.
    • Require high deductibles in high-risk areas (e.g., 10% of building value, so a $500k house has a $50k deductible).

Calculation for Non-Modelled Catastrophic Losses

When calculating for non-modelled catastrophic losses:

  • 20 years of data is generally considered sufficient.
  • Consider the Amount of Insurance.
  • Take the ratio of Cat-to-AIY (then take the straight average):
    • Since losses and Amount of Insurance Year (AIY) will both change consistently due to inflation, taking the Cat-to-AIY ratio for each year ensures that the inflation factor cancels out. This ratio will be comparable across different years, as both the numerator and denominator will be at consistent inflationary levels.
  • Consider the ULAE factor (Unallocated Loss Adjustment Expenses / Loss & Allocated Loss Adjustment Expenses only).
  • Multiply by the projected average AIY per exposure in the effective period (a worked sketch follows this list).
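
A minimal sketch of the non-modelled cat loading calculation, using a shortened (6-year rather than 20-year) hypothetical history; the loss, AIY, ULAE-factor, and projected-AIY values are all assumptions for illustration:

```python
# Hypothetical non-modelled cat losses and amount-of-insurance years (AIY);
# a shortened 6-year history with made-up values (20 years is typical).
cat_losses = [1.2e6, 0.0, 3.5e6, 0.8e6, 0.0, 2.1e6]
aiy        = [4.0e9, 4.3e9, 4.6e9, 5.0e9, 5.4e9, 5.8e9]

# Cat-to-AIY ratio by year: inflation affects numerator and denominator
# similarly, so the ratios are comparable across years.
ratios = [c / a for c, a in zip(cat_losses, aiy)]
avg_ratio = sum(ratios) / len(ratios)   # straight average of the yearly ratios

ulae_factor = 1.06                 # assumed ULAE-to-(loss + ALAE) loading
future_aiy_per_exposure = 320_000  # assumed projected average AIY per exposure

# Cat loading (pure premium) per exposure for the effective period.
cat_loading = avg_ratio * ulae_factor * future_aiy_per_exposure
print(f"Cat loading per exposure: ${cat_loading:.2f}")
```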

Expense Anomalies

We've discussed loss data, but what about other types of data like premiums or expenses?

  • Example: An unusually large expense for an insurer, such as buying a new computer system or software.

What to do?

  • While accounting rules dictate financial reporting, actuaries can make their own assumptions for pricing.

Approach:

  • Smooth out like shock losses.
  • Don't price for this at all: Assume it's paid from surplus, like a sunk cost.

Consideration:

  • Consider the nature of the expense. Ask: Should future policyholders bear this cost?
