Forecasting a headline risk: NetSpectre
When a headline risk comes around, it can lead to massive uncertainty and confused decision-making. A security organization will ask itself:
“So, what do we do, and how quickly do we need to do it?”
NetSpectre advances exploit techniques available for the Spectre vulnerability. The Meltdown and Spectre vulnerabilities are still fresh in our minds: They were described as “catastrophic” and led to massive patching efforts worldwide.
Similar headline risks include Shellshock. We saw “in the wild” exploitation almost immediately following public disclosure.
Despite the similar headlines, we have not seen the same substantial exploitation of Meltdown or Spectre.
Does NetSpectre warrant an urgent response? With what urgency should institutions that mitigate risk pursue this? We’re already a few days in, without immediate attacks.
Something I’ve been exploring over the past couple of years is the role of estimation and forecasting in all aspects of security, and I’ve applied those learnings to better understand NetSpectre.
Let’s forecast the urgency associated with NetSpectre.
I gathered 13 professionals from the security community, and we had a forecast within hours. They included builders and breakers, senior and junior, engineers, managers, and journalists of varying perspectives. Some of them I have met personally; others are a degree away. The panel was selected this way to help improve the quality of the forecast through some intentional de-biasing.
I created an unambiguous scenario to forecast against, with clear conditions about outcomes.
Will attacks using NetSpectre’s methods be observed by the security community “in the wild”?
The options to forecast are:
- Yes. By end of July.
- Yes. By end of August.
- Not before end of August. (This also includes: Never.)
I created judgement criteria that were casual, described as follows. In the future, these could be far more strict and specific, for industrial forecasting of our risks.
This adds complexity to the forecast: panelists would be evaluating the attack technique and the vulnerability, as well as judging my own reliability to analyze them correctly. That’s generally OK for a learning exercise ahead of more substantial forecasts in the future. We’ll see how it goes with casual judgement criteria. That is, no need to bring a blockchain into this yet.
Additionally, this was many of the panelists’ first experience with forecasting. You’ll see that inexperience show in the results, but this is also the benefit of ensemble forecasting: the roughness of individual forecasts smooths out, and a wild forecaster takes less of a toll when averaged in. This could be improved further with calibration training.
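To make the smoothing effect concrete, here is a minimal sketch of how an ensemble forecast is formed. The panelist numbers below are made up for illustration — they are not the real panel’s forecasts — but they show how averaging damps the influence of one wild forecaster:

```python
# Illustrative only: made-up panelist probabilities, NOT the real panel's numbers.
# Each row is one panelist's forecast over the three options:
# (attack by end of July, attack by end of August, not before September / never).
panel = [
    (0.05, 0.10, 0.85),
    (0.02, 0.03, 0.95),
    (0.30, 0.20, 0.50),  # a "wild" forecaster, far from the rest
    (0.01, 0.05, 0.94),
    (0.05, 0.05, 0.90),
]

def ensemble_mean(forecasts):
    """Average each option's probability across the panel."""
    n = len(forecasts)
    return tuple(sum(f[i] for f in forecasts) / n for i in range(3))

july, august, september_plus = ensemble_mean(panel)
print(f"July: {july:.1%}, August: {august:.1%}, Sept+: {september_plus:.1%}")
```

Even though one panelist put 30% on a July attack, the ensemble lands at under 9% for that option — the outlier moves the average, but only by its share of the panel.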
The early results.
I will be using words of certainty from Sherman Kent going forward.
The ensemble forecast is that we will almost certainly (88.5%) see no attacks before September. The panel believes attacks will almost certainly not (8.3%) appear by the end of August, and almost certainly not (3.1%) by the end of this month, July.
Why these forecasts?
Most of the opinions discussed were that NetSpectre’s technique is largely slow and impractical, carries high exposure risk for an adversary, and would require very odd exploitation circumstances to succeed. The largest influence toward the earlier options (a July attack) was the possibility of a breakthrough, further developments, or simply the shortcomings of a forecaster’s own knowledge.
A little bit about my forecast (18% July / 9% August / 73% Sept+).
I don’t plan on revealing the panelists; they’re free to discuss their forecasts publicly if they’d like. I’m the 🦄 in that image. In trying to understand the paper, and the interviews with the researchers who discovered it, I agreed with their own assessment that attacks would be difficult and unlikely. However, I didn’t want to tilt my bet too strongly toward a certain position; if I did, my forecast would “bust” very hard if any further activity advanced their research. So I made sure my forecast erred more toward an “uncertain” position, which would have been (33%/33%/33%). This turned out to be far different from the panel, which expressed pretty significant confidence in several cases. That’s OK.
Remember, all forecasts are generally “wrong”.
A forecast is typically seen as “always wrong.” This is a lesson from meteorology, where forecasting is common and precision is difficult. The running joke is that meteorologists are paid to be wrong on a daily basis.
That’s ok — their forecasts still inform us on valuable decision making.
This ensemble forecast will be “wrong” in September, too: the true outcome will have landed 100% in exactly one of these categories, and only a forecast of 100% on that category would have been exactly right.
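The post doesn’t name a scoring rule, but one common way to measure how far a probabilistic forecast landed from the realized 100% outcome is the Brier score — an assumption on my part, shown here as a sketch:

```python
# The article doesn't specify a scoring rule; the Brier score is one common
# choice for grading a probabilistic forecast after the outcome is known.

def brier_score(forecast, outcome_index):
    """Mean squared distance between the forecast and the 100% outcome vector.
    0.0 is a perfect forecast; higher is worse."""
    outcome = [1.0 if i == outcome_index else 0.0 for i in range(len(forecast))]
    return sum((f - o) ** 2 for f, o in zip(forecast, outcome)) / len(forecast)

# The ensemble's probabilities over (July, August, Sept+/never):
ensemble_forecast = (0.031, 0.083, 0.885)

# If no attack is seen before September (outcome index 2), the score is small:
print(brier_score(ensemble_forecast, 2))
```

A forecast of (0.33, 0.33, 0.33) would score noticeably worse against the same outcome, which is the sense in which a confident, well-placed forecast is “less wrong” even though it was never exactly 100%.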
If we are surprised by a NetSpectre attack in the wild before September, that would not be unusual. This forecast puts the odds of that “surprise” at ~11%: absolutely high enough to be in the realm of possibility, and enough to think about, but perhaps not enough for organizations to sprint toward mitigation.
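That ~11% “surprise” figure is nothing more than the probability mass the ensemble put on the two earlier options. A quick check, using the ensemble percentages reported above:

```python
# Ensemble probabilities from the panel forecast described above.
p_july = 0.031       # attack observed in the wild by end of July
p_august = 0.083     # attack observed in the wild by end of August
p_sept_plus = 0.885  # not before September (includes "never")

# "Surprise" = any in-the-wild attack before September,
# i.e. either of the first two options resolving "yes".
p_surprise = p_july + p_august
print(f"P(surprise) = {p_surprise:.1%}")  # ~11%
```

(The three published figures sum to 99.9% rather than 100% because of rounding.)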
It’s not low enough to dismiss the threat completely, and it is high enough to pay very close attention to, but that threshold fluctuates with the risks you care about and your own goals.
This demonstrates a quantitative, forecast-driven approach to a headline risk: NetSpectre is a terrifying and innovative development, but it will almost certainly not be observed in attacks “in the wild” in the immediate short term.
The message is not “do nothing.” It simply helps prioritize this urgent task against other urgent tasks. You may have more impactful issues that could be exploited any time next week, which would reasonably come first.
And, given the role of a forecast (it is not authoritative “future data”), it’s still important to invest in incident response, because we’ve all been wrong before.