Catastrophe Models (Property)

Last Updated 3/30/2022

Issue:  A catastrophe model (or “cat” model) is a computerized process that simulates potential catastrophic events and estimates the amount of loss they would cause.  Catastrophe models have been evolving rapidly since their introduction in the 1980s, and technological advances and higher-resolution exposure data have accelerated this evolution in recent years. Catastrophe models combine advancing technology, scientific insight, engineering methods, and statistical data analysis to model complex scenarios and events. These advances have produced more accurate risk assessment as extreme weather events and other perils become increasingly prevalent. While many catastrophe models focus on extreme weather, they can also be used to estimate losses from other crises, such as warfare, terrorism, insurrection, and cyber breaches. They have a wide range of applicability as well and can be used to analyze different lines of business, ranging from property and casualty to product liability.

Cat Model Basics: Cat models are designed to quantify the financial impact of a range of potential future disasters. They are intended to inform users on where future events are likely to occur and how intense they are likely to be. From the modeled probability of loss, they produce estimates of direct, indirect, and residual losses. Direct losses result from incidents such as damage to physical structures and contents, deaths, and injuries. Examples of indirect loss are loss of use, additional living expenses, and business interruption. Sources of residual loss include demand surge, labor delays, and inflation in material costs.

There are four basic modules to all cat models, regardless of the peril being modeled. These modules are event, hazard/intensity, vulnerability, and financial. Depending on the source, these modules’ names can slightly vary, but the underlying function of the modules remains the same.

  • Event Module: The event module generates thousands of possible stochastic (random) event scenarios based on historical data and parameters that attempt to represent reality.
  • Hazard or Intensity Module: The intensity module determines the level of physical hazard specific to geographical locations using the location-specific risk characteristics for each simulated event.
  • Vulnerability Module: The vulnerability module quantifies the expected damage from an event conditioned upon the exposure characteristics and event intensity.
  • Financial Module: The financial module measures monetary loss from the damage estimates. Insured loss estimates are generated for different policy conditions, such as deductibles, limits, and attachment points. Varying financial perspectives, such as primary insurance or reinsurance treaties, are also provided.
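The four modules above form a pipeline: simulated events feed a hazard calculation, which feeds a damage estimate, which feeds a financial calculation. The following is a minimal sketch of that pipeline in Python. All numerical parameters (intensity ranges, the distance-decay factor, the damage function, policy terms) are illustrative assumptions, not values from any real model.

```python
import random

random.seed(42)

def event_module(n_events):
    # Stochastic event set: each simulated event gets a random source
    # intensity (e.g., peak wind speed). The range is illustrative only.
    return [{"intensity": random.uniform(40.0, 160.0)} for _ in range(n_events)]

def hazard_module(event, site_distance_km):
    # Local hazard: source intensity decays with distance to the site
    # (a toy linear decay standing in for physical attenuation).
    return event["intensity"] * max(0.0, 1.0 - site_distance_km / 500.0)

def vulnerability_module(local_intensity):
    # Damage ratio (0 to 1) as a simple increasing function of
    # local intensity, conditioned on the exposure at the site.
    return min(1.0, (local_intensity / 200.0) ** 2)

def financial_module(damage_ratio, insured_value, deductible, limit):
    # Insured loss: ground-up loss net of deductible, capped at the limit.
    ground_up = damage_ratio * insured_value
    return min(max(ground_up - deductible, 0.0), limit)

# Run the pipeline for a single hypothetical site and policy.
losses = []
for ev in event_module(10_000):
    local = hazard_module(ev, site_distance_km=100.0)
    dr = vulnerability_module(local)
    losses.append(financial_module(dr, insured_value=500_000,
                                   deductible=10_000, limit=400_000))
```

Real models replace each toy function with peril-specific science and engineering, but the module boundaries and the flow of data between them are the same.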

Each of the modules makes assumptions about the parameters of the model. Assumptions are made either about specific values of the parameters (deterministic) or about the probability distributions of parameters (stochastic). Where possible, these assumptions and parameter values/distributions are subjected to various statistical tests. However, there are many parameters whose true value (or distribution) cannot be explicitly determined; in these instances, the models incorporate informed judgment and subjectivity.
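The deterministic/stochastic distinction can be illustrated briefly: a deterministic assumption fixes a parameter at one value, while a stochastic assumption draws it from a distribution, so repeated model runs reflect the uncertainty in the parameter itself. The values and the lognormal distribution below are arbitrary choices for illustration.

```python
import random

random.seed(0)

# Deterministic assumption: a single fixed value for the parameter.
decay_rate = 0.02

# Stochastic assumption: the parameter is drawn from a probability
# distribution (a lognormal here, chosen purely for illustration),
# so each model run samples a different plausible value.
samples = [random.lognormvariate(0.0, 0.5) for _ in range(1_000)]
mean_estimate = sum(samples) / len(samples)
```

Running the full model over many such draws is what allows a cat model to report a distribution of losses rather than a single number.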

Key metrics provided by a probabilistic catastrophe model include the exceedance probability (EP) curve, the probable maximum loss (PML), and the average annual loss (AAL). EP is the likelihood that a loss greater than or equal to a given amount will occur in the coming year. The PML is the loss amount associated with a given annual exceedance probability, usually expressed as a return period. For example, the 250-year PML corresponds to an annual exceedance probability of 0.4%, i.e., the 99.6th percentile of the annual loss distribution. The AAL is the mean of the annual loss distribution and is represented as the area under the EP curve. It is frequently used in pricing and ratemaking to evaluate the catastrophe load.
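All three metrics can be estimated directly from a set of simulated annual losses. The sketch below uses a toy exponential loss distribution purely for illustration; a real model would supply the simulated years.

```python
import bisect
import random

random.seed(1)

# Toy set of simulated annual losses (exponential with a $1M mean,
# illustrative only), sorted for percentile lookups.
annual_losses = sorted(random.expovariate(1 / 1_000_000)
                       for _ in range(100_000))

def exceedance_probability(losses_sorted, threshold):
    # EP: fraction of simulated years with loss >= threshold.
    idx = bisect.bisect_left(losses_sorted, threshold)
    return 1.0 - idx / len(losses_sorted)

def pml(losses_sorted, return_period_years):
    # PML: the loss at the (1 - 1/T) percentile of annual losses,
    # e.g., T = 250 gives the 99.6th percentile.
    q = 1.0 - 1.0 / return_period_years
    return losses_sorted[int(q * len(losses_sorted))]

# AAL: the mean of the annual loss distribution.
aal = sum(annual_losses) / len(annual_losses)
pml_250 = pml(annual_losses, 250)
ep_1m = exceedance_probability(annual_losses, 1_000_000)
```

Plotting `exceedance_probability` against a range of thresholds traces out the EP curve; the area under that curve equals the AAL.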

Background: Hurricane Andrew brought unprecedented losses after two decades of little hurricane activity. This changed the insurance industry’s perception of hurricanes and led to the industry adoption of catastrophe modeling. Since this time, cat models have continued to evolve to reflect better understanding of the underlying assumptions, intricacies, and science of a peril and its loss drivers. Model advancements are frequently driven by events revealing deficiencies in a model.

The 2004 and 2005 Atlantic hurricane seasons had a substantial impact on modeling assumptions. Two consecutive years of record activity and losses brought a new focus on the impact of aggregate losses from multiple hurricanes. Unlike prior hurricanes, Katrina in 2005 produced more losses from secondary flooding than from the original wind-generated catastrophe. As a result, modelers also began to incorporate the impact of secondary losses from super catastrophes into their models.

Hurricanes since Katrina have highlighted the impact additional factors can have on losses, such as demand surge, evacuation, sociological risks, and political influence. Models are increasingly using combinations of economic and sociological modeling to incorporate loss amplification resulting from these additional factors.

Modeling platforms have also been advancing. Modeling vendors began to use advanced modeling technology in 2000 to simulate the true physics of events more accurately. Today, more powerful computers and mobile communications have enabled physics-based models to reach the high level of resolution needed to provide location-specific forecasts. However, higher resolutions bring higher uncertainty and sensitivity in modeled results. This has led to a growing movement toward open models, which make model components more visible and accessible to users and allow for more efficient model validation and verification.

Status: The current focus of modelers is on reaching resolutions high enough to price insurance for an individual property's specific characteristics. However, the resolution of some perils, such as flood, is still evolving. Continued advances in computing capabilities and data collection instruments have the potential to fill model gaps and provide real-time modeling, which could significantly enhance modelers' ability to understand and quantify catastrophe risk. Such advances would likely increase the complexity of models, propelling demand for more transparent and flexible models as well as for trained catastrophe modeling and risk-management experts.

The NAIC Catastrophe Insurance (C) Working Group of the Property and Casualty (C) Committee serves as a forum for discussing issues and solutions related to catastrophe models. The Working Group also maintains the NAIC Catastrophe Computer Model Handbook. The Handbook explores catastrophe computer models and issues that have arisen or can be expected to arise from their use. It provides guidance on areas and concepts to allow for better understanding and to stay updated about cat models.

On March 23, 2021, the Capital Adequacy (E) Task Force met to discuss various issues. One topic that arose was how three kinds of CAT models deviate from the standard vendor models: (1) internal CAT models; (2) vendor CAT models with adjustments or different weights; and (3) derivative models based on the vendor models. It was stated that detailed instructions for evaluating internal CAT models have been included in the risk-based capital (RBC) instructions; however, it was proposed that more in-depth instructions on derivative models and vendor models with adjustments may be necessary.

Furthermore, the NAIC Climate and Resiliency Task Force will continue to evaluate and review approaches to address climate risks, catastrophe modeling, and mitigation, and to identify sustainable solutions within the industry.