Does Your Incident Evidence Really Lead to Better Intelligence?

Anton Chuvakin · Published in Anton on Security · 4 min read · Sep 13, 2019


So I admit this post is not about security incident response in general (because I’ve written enough on that in the past), but about a link between incident response (IR) and threat intelligence (TI) in particular.

We definitely talk about how TI helps us understand an ongoing incident. Without mentioning the dreaded “A” word (attribution), threat intel can separate run-of-the-mill ransomware from an elite state attacker. This has been covered enough in the past.

On the other hand, past incidents can and should generate intelligence, but in real life they often don’t. This is mostly about the “Lessons Learned” phase of incident response, which is, frankly, often rushed through or approached very narrowly, in a non-intelligent manner.

This blog is a lament about it. Is lamenting useful? Perhaps as a motivation for the less motivated. Or as a reminder to the less informed. Or perhaps as a little push to those who sit on the fence about it…

Where does IR provide intel? For years, we’ve seen elite IR firms supply their TI divisions with intelligence, so why hasn’t this become more popular among the rest of the world?

After all, we all have incidents. We all want to not step into the same poo again. We all want to appear smart and save face (yes, I am preparing for a trip to Japan :-))

Creating intel from past incidents appears to be one more practice from the “everybody enlightened does it, but it never trickles down to the mainstream” list (as a funny aside, hunting has perhaps started to trickle down a bit, but occasionally it takes the form of “cargo cult” hunting).

So, why don’t more organizations learn from their incidents by creating threat intel?

First, there is a type of organization that just does not learn. Security IR at such a place is really just the proverbial “nuke from orbit,” with no investigating. So, all intelligence dies with the reimaged disk and the powered-down memory.

Second, some learn very narrowly, as in “oh, we got hacked via Struts? Now we are going to patch Struts.” What can I say to this? Nothing, really, I am just going to leave it here :-) A degree of operational agility would also come in handy here.

Third, occasionally people get in the way: while having separate IR and TI teams is wise at higher maturity levels, it is assumed that (at said maturity level) the teams actually work together. Now, do they? I’ve seen examples where the IR team deems incident evidence too sensitive to become intelligence, for example. Or simply wants to own it. Building a wall between TI and IR, or having it open in one direction only, is not a good idea.

Fourth, in some cases there is simply no good place to collect, keep and utilize this intel. Most TIP (threat intelligence platform) vendors nowadays seem to focus on pumping through large volumes of indicators, not on handling incident intel like local malware captures (sure, there is MISP and friends, but perhaps they are not for everyone).
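
If you do run MISP or a similar platform, capturing incident-derived intel can start very small. Here is a minimal sketch using PyMISP that files indicators from one incident as a single event; the MISP URL, API key, incident name and indicator values are all hypothetical placeholders, and your own naming, tagging and distribution choices will differ.

```python
# Minimal sketch: store indicators extracted from an incident in MISP via PyMISP.
# The MISP URL, API key, incident name, and indicator values are placeholders.
from pymisp import PyMISP, MISPEvent

misp = PyMISP("https://misp.example.org", "YOUR_API_KEY", ssl=True)

event = MISPEvent()
event.info = "Indicators from incident IR-2019-042 (hypothetical)"
event.distribution = 0  # 0 = your organization only; widen later if you decide to share

# Attributes observed during the investigation (placeholder values)
event.add_attribute("ip-dst", "203.0.113.10", comment="C2 address seen in proxy logs")
event.add_attribute("url", "http://files.example.net/loader.bin", comment="malware download URL")
event.add_attribute(
    "sha256",
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    comment="dropper hash (placeholder value)",
)

misp.add_event(event)
```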

Fifth, and this is a subtle one: extracting indicators and other intel from incident evidence implies that your threat model includes threat actors where such a practice is justified. For example, there is no sense in sandboxing and analyzing malware that is actually stopped by anti-virus (in most cases). What I saw in the past is that the operational process for turning incident evidence into indicators and other intel can be created only after the foundations of threat assessment (i.e. knowing who is out to get you), robust IR and robust TI are laid down.

So… what should you actually do?

My conclusion today will bore the literati to tears, but will hopefully help the mainstream: DO make use of your own incident data to create intelligence that ultimately helps you, the defender. As a first step, DO spend a few hours contemplating the most recent incident you had with the intel lens in mind (i.e. what can we extract and use for detection content, for qualifying other incidents, for sharing with peers, etc.). This will imbue the Lessons Learned phase of your IR with specific value.

How do you accomplish that? How do you actually convert the “IR outputs” into “intel inputs”?

A classic IR process already implies that the Lessons Learned phase includes questions like “What did we learn during the course of this incident?”, “Where did our investigations lead?”, “How could this have been prevented?”, “How could it have been detected faster?” and so on.

If you extract indicators and gather intelligence from the evidence at hand, you may also shed some light on “Why were we targeted?”, “Have we seen this threat actor [group] before?” and perhaps even “What is the level of actor sophistication?”

In addition, the indicators extracted from logs and attacker tools should lead to new detection content for SIEM, EDR and other tools. For example, a piece of malware will yield a hash (duh!), a set of IPs it connects to, a hostname or a URL where it came from, and perhaps a protocol peculiarity or two (if its communication is in any way unusual, and it usually is).
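
To make that concrete, here is a minimal sketch (plain Python, standard library only) that hashes a captured sample and scrapes it for embedded IPs and URLs; the file path is a hypothetical placeholder, and real extraction (sandbox runs, PCAP and memory review) will obviously surface more than naive string matching.

```python
# Minimal sketch: pull basic indicators (hash, IPs, URLs) out of a captured sample.
# The sample path is a placeholder; real-world extraction (sandboxing, PCAP review,
# memory analysis) will surface far more than naive string matching does.
import hashlib
import re
from pathlib import Path

SAMPLE = Path("evidence/ir-2019-042/dropper.bin")  # hypothetical incident artifact

data = SAMPLE.read_bytes()
text = data.decode("latin-1")  # lossless byte-to-text view for regex scanning

indicators = {
    "sha256": [hashlib.sha256(data).hexdigest()],
    "ipv4": sorted(set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text))),
    "url": sorted(set(re.findall(r"https?://[^\s\"'<>]+", text))),
}

# Each of these can become detection content: a hash blocklist entry for EDR,
# destination IP/URL watchlists for the SIEM or proxy, and so on.
for kind, values in indicators.items():
    for value in values:
        print(f"{kind}\t{value}")
```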

Frankly, even the chain of your systems that the attacker went through may reveal something you can use in the future. The types of credentials utilized, or the packaging tool used (if data was staged for exfiltration), may teach you something about what this or other attackers may bring next time. What the attacker did at each step may also teach you about the weak spots in your layered defense.
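
One way to keep that kind of observation usable, rather than buried in the final incident report, is to record the attack path as structured data you can compare against the next incident. The sketch below shows one hypothetical shape for such a record; the hosts, credential types and tool names are illustrative, not a standard format.

```python
# Minimal sketch: record the attacker's path through your systems as structured data,
# so future incidents can be compared against it. Field names and values are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class AttackStep:
    host: str             # system the attacker touched
    action: str           # what they did there
    credential_used: str  # type of credential involved, if any
    tooling: str          # attacker tool observed at this step

attack_path = [
    AttackStep("web-dmz-01", "initial access via vulnerable web app", "none", "webshell"),
    AttackStep("app-srv-07", "lateral movement", "service account password", "psexec-style tool"),
    AttackStep("file-srv-02", "data staged for exfiltration", "domain user", "archiving utility"),
]

print(json.dumps([asdict(step) for step in attack_path], indent=2))
```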

Finally, I’d give you a quick bit of the opposite advice, because “The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth.” Use past incident data, but don’t obsess over it: the attackers may well throw something completely different at you next time…
