It All Comes Down to the Data

Accurate, appropriate peril metrics are fundamental to physical climate risk modeling and reporting

Meghan Purdy
Jupiter Intelligence
Jan 19, 2021

As Jupiter Intelligence described in a new special report in November, regulators worldwide are designing and implementing new standards for quantifying climate risks for investors, insurers, and corporations, with the Task Force on Climate-related Financial Disclosures (TCFD) recommendations becoming the standard.

The Bank of England, widely considered the foremost regulator on systemic impacts from climate change, is requiring firms to run stress tests and apply climate scenarios to their short- and long-term business outlooks. With the United Kingdom recently announcing that most sectors of its economy will be subject to new climate disclosure regulations, it’s more important than ever that companies ready themselves for this new reality.

Where to start? The climate data, of course. It’s a given that this data should be grounded in the latest consensus climate science. But it’s equally important that the right metrics are extracted from climate data to answer the right questions, so that firms’ economic models are fed by metrics that make sense. Only then can companies and organizations adhere to regulations, adjust business strategies as needed, and improve their resilience to the material risks to come.

Transition Risk vs. Physical Risk

To date, climate analysis has largely focused on the quantification of transition risk.

In some ways, transition risk is an easier leap for firms to make. For firms used to forecasting consumer preferences in order to understand demand shifts, or modeling oil shocks to understand supply risk, transition risk is a natural extension. Instead of a shift away from gas-powered vehicles being driven by consumer preferences, the shift may occur due to government vehicle efficiency standards. Or instead of a tariff on steel driving up supply costs, the steel is more expensive because it must be produced with a smaller carbon footprint. Firms typically have teams of analysts creating forecasts and pumping figures into their enterprise risk management framework. Transition risk is no different.

Conversely, physical risk assessment is outside most firms’ wheelhouse and is only recently moving from black to grey swan territory, as I discussed in October.

Certainly, a firm’s engineers and economists are well-versed in how their assets are affected by flooding, high winds, lightning, extreme heat, and other perils. But that is only Step Two in physical climate risk; Step One requires an accurate measurement of that future peril. Climate risk is not immune to the “garbage in = garbage out” concern. No amount of impressive economic modeling in Step Two can make up for foundational peril data that are full of cracks; if a utility prepares for one meter of water instead of two, the wrong resiliency measures will be taken. That foundation can only be solidified if climate scientists are involved to ensure that these peril metrics are both accurate and appropriate.

Defining Accurate, Appropriate, and Transparent Climate Data

Let’s examine what it means for peril data to be accurate. It might be tempting for firms to grab climate data directly from a public source, but analysts need to ensure:

  1. The data is assessed for bias
    In climate modeling, a model’s bias can sometimes be larger than the climate signal itself. For example, a model could underestimate temperatures in the Indian monsoon season by 2°C and thus obscure a temperature rise of 1.5°C. Or a model might be fairly predictive in the mid-latitudes but weak near the equator. While climate scientists have established methods to identify and correct bias, data consumers need to confirm that these methods have been responsibly applied to their chosen data sources. (A minimal bias-correction sketch follows this list.)
  2. The data is downscaled
    Global Climate Models (GCMs) provide data at a resolution of about 100km (60 miles), but numerous environmental factors determine how that weather tends to manifest at the local level. For example, surface roughness from trees and tall buildings alters wind risk, and local ground cover affects the fuel available to wildfires. Low-resolution data can mask many extremes that may make specific neighborhoods too hot to inhabit or severely flood-prone. (Look out for a post on downscaling authored by Jupiter Intelligence’s head of earth and ocean systems, Josh Hacker.)
  3. The data can model extremes
    Firms are often asked to perform climate stress tests, i.e., to estimate potential losses under extreme conditions. By definition, such events appear very infrequently in climate data: a 100-year flood event should occur only once in 100 years of simulated weather. Climate scientists address this data availability problem with statistical methods, by blending multiple models, and with ensemble approaches, in which the same GCM provides multiple simulations of the future. Data consumers who are performing stress tests should be particularly attuned to these pitfalls. (A return-period sketch also follows this list.)
  4. The data is up to date
    As I discussed last November, given the rapid rate of improvement in climate modeling, it’s crucial to stay attuned to the latest developments in the scientific community. Climate metrics should be based on models from the sixth and most recent phase of the ongoing Coupled Model Intercomparison Project (CMIP).
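
To make the bias point in item 1 concrete, here is a minimal sketch of empirical quantile mapping, one common bias-correction technique. Everything here is illustrative: the data are synthetic, the 2°C cool bias echoes the monsoon example above, and this is not a description of Jupiter’s production methods.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_quantiles=100):
    """Empirical quantile mapping: adjust future model output so that the
    model's historical distribution lines up with the observed distribution."""
    q = np.linspace(0.01, 0.99, n_quantiles)
    model_q = np.quantile(model_hist, q)   # modeled historical quantiles
    obs_q = np.quantile(obs_hist, q)       # observed historical quantiles
    correction = obs_q - model_q           # additive bias at each quantile
    # Look up each future value's position in the model's historical
    # distribution and apply the corresponding correction.
    return model_future + np.interp(model_future, model_q, correction)

# Illustrative synthetic data: a model that runs about 2 degC too cool.
rng = np.random.default_rng(0)
obs_hist = rng.normal(30.0, 3.0, size=3650)       # "observed" daily temps, degC
model_hist = rng.normal(28.0, 3.0, size=3650)     # biased model, historical period
model_future = rng.normal(29.5, 3.0, size=3650)   # biased model, future period

corrected = quantile_map(model_hist, obs_hist, model_future)
print(f"raw future mean:       {model_future.mean():.2f} degC")
print(f"corrected future mean: {corrected.mean():.2f} degC")
```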

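Item 3 can be illustrated the same way. The toy sketch below fits a Generalized Extreme Value distribution to a (synthetic) series of annual flood maxima and reads off a 100-year return level, i.e., the level exceeded with a 1-in-100 chance in any given year. In practice the input would come from long or ensemble climate simulations rather than a made-up record.

```python
import numpy as np
from scipy.stats import genextreme

# Illustrative synthetic record: 50 years of annual-maximum river levels (m).
rng = np.random.default_rng(1)
annual_max = genextreme.rvs(c=-0.1, loc=2.0, scale=0.5, size=50, random_state=rng)

# Fit a Generalized Extreme Value distribution to the annual maxima.
shape, loc, scale = genextreme.fit(annual_max)

# The 100-year return level is exceeded with probability 1/100 in any year.
level_100yr = genextreme.isf(1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"estimated 100-year flood level: {level_100yr:.2f} m")
```
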
In addition to accurate data, firms must also consider whether their chosen metrics are appropriate for the task. Imagine an electric utility that needs to understand future hail risk at its photovoltaic plants. It knows the vulnerability of its assets to hail, but first it needs to know how often they will be exposed to it. Unfortunately, hail isn’t directly output by GCMs, so the utility might be tempted to use a proxy like Convective Available Potential Energy (CAPE) instead. That approach introduces a lot of uncertainty: few GCMs directly output CAPE, so the assessment will be prone to bias; and while CAPE approximates the potential for convective storms, those storms don’t always produce hail, so the potential for error will be high. It would be more appropriate to project hail risk directly by building a model from several metrics such as CAPE, wind shear, and mixing ratio. That model’s performance against historical hail events should then be assessed to determine its bias and predictive skill before the utility uses it for decision-making (a toy illustration follows below). It’s a complex task, but it’s absolutely necessary for a high-quality analysis.
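
As a toy illustration of that multi-predictor approach, the sketch below fits a logistic regression for hail occurrence from CAPE, wind shear, and mixing ratio, then checks its skill on a held-out slice of “historical” events. The data, coefficients, and skill score are all synthetic placeholders, chosen only to show the workflow of building a predictor and validating it against observations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a historical training set: rows are storm days,
# columns are CAPE (J/kg), 0-6 km wind shear (m/s), and mixing ratio (g/kg).
rng = np.random.default_rng(2)
n = 2000
cape = rng.gamma(shape=2.0, scale=600.0, size=n)
shear = rng.normal(15.0, 5.0, size=n).clip(min=0)
mixing_ratio = rng.normal(12.0, 3.0, size=n).clip(min=0)
X = np.column_stack([cape, shear, mixing_ratio])

# Synthetic "observed hail" labels: higher CAPE and shear raise the odds.
logit = -6.0 + 0.0015 * cape + 0.12 * shear + 0.05 * mixing_ratio
hail_observed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Hold out part of the record to assess skill against historical events.
X_train, X_test, y_train, y_test = train_test_split(
    X, hail_observed, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out ROC AUC vs. historical hail days: {auc:.2f}")
```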

The challenge of data appropriateness also extends to the choice of scenario. As my 2020 post also notes, the inclusion of new Shared Socioeconomic Pathways (SSPs) into CMIP6 is an important development. This helps us understand how certain socioeconomic conditions can lead to radiative forcing, and how that forcing in turn causes the climate to change. Said another way, it gives users a consistent lexicon to tie the actions of our society to consequences for the planet. This is critical for companies conducting scenario analysis and, in particular, for gaining a complete view of the interconnectedness of physical and transition risk.

Finally, climate data must be transparent. Methods should be open so that data consumers can assess the quality of results against the issues outlined above and, where relevant, convey that assessment to regulators. Transparency also extends to openness about uncertainty: despite the many leaps that scientists have made in projecting climate risk, significant uncertainty remains. Data providers need to be open about the sources of that uncertainty and quantify it wherever possible. That quantification is critical to help firms convey the range of outcomes that may occur (a minimal example follows).
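
One simple way to communicate that range, sketched below with hypothetical numbers, is to report percentiles across an ensemble of projections rather than a single value. The ensemble here is synthetic and stands in for whatever set of model runs underlies a given metric.

```python
import numpy as np

# Hypothetical ensemble: 30 members' projections of peak flood depth (m)
# at one site for a given scenario and time horizon.
rng = np.random.default_rng(3)
ensemble_depths = rng.lognormal(mean=0.3, sigma=0.35, size=30)

# Report a central estimate plus a range, rather than a single number.
p5, p50, p95 = np.percentile(ensemble_depths, [5, 50, 95])
print(f"median projected depth: {p50:.2f} m")
print(f"5th-95th percentile range: {p5:.2f} - {p95:.2f} m")
```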

Working with Data Partners

Physical climate risk can be challenging to quantify. Raw climate data is widely and freely available, but it requires significant resources and expertise to extract, de-bias, downscale, and process into appropriate metrics to feed firms’ economic models. And unlike with transition risk, firms are unlikely to have this expertise in-house. As firms decide which climate data will underpin their analyses, they must demand transparency from data providers and assess the metrics’ accuracy and appropriateness for their needs. Only with this foundation in place can firms feel confident that their climate journey is starting off on the right foot.

Meghan Purdy is a Senior Product Manager at Jupiter Intelligence. Learn more about Jupiter at jupiterintel.com.
