Get the Story Behind the Score

Scores can be a useful way to approximate potential physical climate risk — but only if they clarify, rather than muddle, insights about the underlying climate metrics.

Meghan Purdy
Jupiter Intelligence
5 min read · Apr 22, 2021


Photo by Andrew Malone

When we think about physical climate risk, what makes a location risky? It’s a question that my colleagues at Jupiter and I often ponder. We’ll execute a climate analysis on millions of locations on behalf of a customer. Our ground-up approach means that, for each location, we produce thousands of metrics: the flooding, hail, extreme heat, and other perils found historically, today, and in the future, and how they might change under various climate scenarios. But before we get swept up in economic loss modeling or aggregating to a portfolio level, we like to take a breath and understand what our climate output is telling us. What’s the story here?

Last spring, as we were preparing to release ClimateScore Global, we spent hours building dashboards and trying to identify the “right” slices of the data to uncover the story. One sticking point emerged quickly: surely the change element of climate change is what our users care about most, right? But is it relative change, or absolute? Who is at higher heat risk: Dubai, which is expected to add even more extreme heat days to its already lofty total; or Paris, where air conditioning is rare, and very hot days (> 35°C/95°F) could be 10x more frequent by the end of the century?

Chart comparing Dubai and Paris’ hot days

Obviously, Dubai needs to be flagged for high heat risk. But I’m also worried about Paris, which will have to make a transition from an occasional extremely hot day to weeks of extremes.
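The absolute-versus-relative distinction is easy to make concrete. In the sketch below, the Paris figures come from this article (about 1.5 extreme heat days per year today, roughly 20 by the end of the century); the Dubai figures are made-up placeholders, since the article doesn't quote them:

```python
# Illustrative comparison of absolute vs. relative change in extreme-heat
# days per year. Paris values are from the article; Dubai values are
# hypothetical placeholders for the sake of the sketch.
locations = {
    "Paris": (1.5, 20.0),     # (days/year today, days/year end of century)
    "Dubai": (100.0, 130.0),  # assumed values, not from the article
}

for name, (today, future) in locations.items():
    absolute = future - today   # extra hot days per year
    relative = future / today   # growth factor
    print(f"{name}: +{absolute:.1f} days/year ({relative:.1f}x increase)")
```

On these numbers, Dubai adds more hot days in absolute terms, while Paris sees by far the larger relative jump, which is exactly why a single "change" number can't capture both stories.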

Future Shock: Sometimes, It’s All Relative

We can differentiate between Dubai-like risk and Paris-like risk only if we separate the concepts of present-day risk and expected future change in that risk. Helpfully, these two dimensions can be plotted on a chart:

Four quadrants of Risk vs. Change
  • Bottom-left quadrant. If present-day risk and the level of future change are both low, the location is labeled “Good.” For example, heat risk in a place like Edinburgh might be low today and be expected to remain low in the future.
  • Top-right quadrant. If present-day risk and the level of future change are both high, the location is labeled “Bad.” A place like Bangkok already experiences high heat, and its risk is climbing rapidly.
  • Bottom-right quadrant. Present-day risk is high, but it isn’t changing too quickly in the future. These locations are likely to be accustomed to high levels of risk and may have already deployed adaptation measures. For that reason, this quadrant is labeled “Scary but manageable.” For example, Dubai already has many days per year of extreme heat, and while that is rising in the future, its deployment of air conditioning throughout the city will make it more resilient to change.
  • Top-left quadrant. Present-day risk is low but is expected to rise very quickly, relatively speaking, in the future. The end state for these locations won’t approach that of the “Bad” and “Scary” quadrants, but it will be high compared to their current risk level. For example, Paris currently experiences about 1.5 days per year of extreme heat (defined as a high temperature exceeding 35°C / 95°F); that’s low enough to make air conditioning uncommon. That number could rise to 20 days per year by the end of the century. While that’s not as high as Bangkok or Dubai, it will be quite disruptive to the population.
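The quadrant logic above can be sketched as a simple classifier. The thresholds, the normalized inputs, and the “Future shock” label for the top-left quadrant (borrowed from the section heading) are my own illustrative assumptions, not Jupiter's actual scoring methodology:

```python
def classify(present_risk: float, future_change: float,
             risk_threshold: float = 0.5,
             change_threshold: float = 0.5) -> str:
    """Map a (present-day risk, expected change) pair to a quadrant label.

    Both inputs are assumed to be normalized to [0, 1]; the 0.5 cutoffs
    are placeholders, not a real methodology.
    """
    high_risk = present_risk >= risk_threshold
    high_change = future_change >= change_threshold
    if high_risk and high_change:
        return "Bad"                   # top-right: Bangkok-like
    if high_risk:
        return "Scary but manageable"  # bottom-right: Dubai-like
    if high_change:
        return "Future shock"          # top-left: Paris-like
    return "Good"                      # bottom-left: Edinburgh-like

# Made-up normalized values for each archetype:
print(classify(0.2, 0.1))  # Good
print(classify(0.9, 0.8))  # Bad
print(classify(0.9, 0.3))  # Scary but manageable
print(classify(0.1, 0.9))  # Future shock
```

The point of the two-axis view is that a single blended score would collapse Dubai and Paris into similar numbers while hiding that they need very different responses.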

Scores Tell You Where to Dig

As climate analytics proliferate, scores are becoming a useful tool to guide non-scientists in the “right” direction when weighing the riskiness of their assets. But users should be aware that scores are inherently a judgment call. An asset’s characteristics, its value, its expected holding time or remaining useful life, and other factors may dramatically affect how consequential the risk actually is and the timeline over which we want to measure how that risk will change. Those factors can only be assessed by looking at the underlying metrics. For that reason, a useful score is one that highlights the riskiest locations so that users know where to dig further. Thus, there are certain characteristics of “good” and “bad” scoring methodologies that users should assess before jumping on board with a tool:

A framework for analyzing the features of scoring methodologies

At Jupiter, despite my own product being named ClimateScore Global, we initially held back from publishing scores. We wanted our users to gain confidence in our underlying peril metrics, and we wanted to be careful about asserting our own judgment on the inherent riskiness that those metrics reveal. But as I write today, our understanding of riskiness has matured just as both the skill to build scores and the demand for them have grown. The sixth and newest release of ClimateScore Global, v2.2, now includes “risk,” “change,” and “overall” scores per peril. While the example above only discusses extreme heat days, each peril’s score is based on several underlying metrics, including uncertainty. Over the coming months, I’ll be talking about how each peril brings unique challenges to the problem of quantifying climate riskiness, and I’ll be revealing the insights that emerge from them.

Meghan Purdy is a Senior Product Manager at Jupiter Intelligence. Learn more about Jupiter at jupiterintel.com.
