What Are You NOT Detecting?
What are you not detecting?
OK, what threats are you NOT detecting?
Still didn’t help?
What I mean here is: are you thinking about these:
- Threats that you don’t need to detect due to your risk profile, your threat assessment, etc.
- Threats that you do need to detect, but don’t know how.
- Threats that you do need to detect and know how, but cannot operationally (e.g. your SIEM will crash if you ingest all the cloud logs).
- Threats that you do need to detect and know how, but do not (yet?) for some other reason.
- Threats that you do need to detect, know how and think you detect, but you really don’t (oops…).
- Threats that you do need to detect, know how, and do detect — but too late for any useful outcome (e.g. a week later for ransomware).
This is useful. This is like an unholy marriage of your “not-to-do list” with your bucket list :-)
Let’s go through these one by one and have some fun.
#1
To me, it is very useful to think about what you do NOT want to detect (item 1), because I’d rather it be an explicit and intelligent (also, intelligence-driven) decision, not a byproduct of some broken security process or some, ahem, intern deciding it. However, we all know infosec/cyber/IT is awesome at intelligently assessing risk … right? Right?!
This means that when making a decision to not detect something, the fact base for this decision must be solid. Also, “a rule” to not detect something or, more practically, an exception to a rule to detect something must be much more prescriptive than a rule to detect something…
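To make "more prescriptive than a rule to detect" concrete, here is a minimal Python sketch of an explicit do-not-detect record; the `DetectionException` fields and the validation rules are illustrative assumptions, not any specific product's schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record for an explicit "do not detect" decision.
# Every field is mandatory: an exception with no rationale, owner,
# or expiry is a silent detection gap, not a decision.
@dataclass(frozen=True)
class DetectionException:
    rule_id: str      # the detection rule this exception narrows
    scope: str        # what exactly is excluded (host, account, path)
    rationale: str    # why this is acceptable risk
    approved_by: str  # a named owner, not "the SOC"
    expires: date     # forces periodic re-review of the decision

def validate(exc: DetectionException) -> None:
    """Reject vague or expired exceptions at load time."""
    if not exc.rationale.strip() or not exc.approved_by.strip():
        raise ValueError(f"{exc.rule_id}: exception lacks rationale or owner")
    if exc.expires < date.today():
        raise ValueError(f"{exc.rule_id}: exception expired, re-review it")

exc = DetectionException(
    rule_id="win_admin_share_access",
    scope="host=backup01",
    rationale="Backup service legitimately accesses admin shares",
    approved_by="jane.doe",
    expires=date(2030, 1, 1),
)
validate(exc)  # raises if the decision is not fully documented
```

The design choice here is that an undocumented exception simply refuses to load, so "some intern decided it" cannot happen silently.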
#2
IMHO, it is absolutely essential to think about what you need to detect, but don’t know how (item 2). Ultimately, if an attacker thinks about it first, you’d be in hot water — and deep hot water at that — because you sort of “knew about it.” This is an area of threat research, ripping indicators out of artifacts, studying TTPs, etc. Basically, “get better” is the answer here.
Furthermore, just like in the good old days, when people would architect networks with choke points for placing NIDS devices, we need to architect for modern detection. This will reduce the “want but can’t” situation.
#3
This case is different from the previous one, because you know what you need to do, you just cannot do it (item 3). For some organizations, cloud threats are a big part of the “known but infeasible.” Frankly, I’ve seen enough cases where the public cloud environment is one big detection gap. This may be due to technical limitations hampering detection or economics preventing some technical choices.
#4
This one (item 4) is conceptually simple, but operationally not so much. A detection roadmap that naturally evolves with threats is a great idea, but it implies that there are things that are not detected today. Detection in depth may be a part of the answer here (e.g. we want to detect this early, and we will, but for now we detect it at later intrusion kill chain stages).
Of course, you will never be able to proactively detect everything you need, should, and want to detect. Additionally, detection is a moving target, so there is no static goal where you write 100 rules and say “wow, I’m done!” As a result, this applies to everybody, whether low or high maturity, but being explicitly aware of it is useful.
#5
Thinking you are detecting something while in reality you don’t is a major source of hilarity … not (item 5). This is the area of red/purple teaming, attack simulation, and other methods for (a) consistently validating and (b) aggressively testing your detections. Yes, you do need both the boring (such as checking the mappings against ATT&CK and other detection rule consistency checks) and the exciting (such as red teaming without telling your SOC).
Now, everybody and their dog maps detection content/rules to MITRE ATT&CK. But sometimes the devil is in the details. You can see gaps between you and some generic model, but not gaps between what you have and what you truly need. This is why security is fun…
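A minimal sketch of that distinction, assuming a hypothetical rule-to-technique mapping: the gap analysis runs against the technique set you actually need, not the whole generic matrix:

```python
# Hypothetical rule-to-technique mapping (IDs in MITRE ATT&CK format).
detections = {
    "rule_psexec_lateral": {"T1021.002"},
    "rule_lsass_dump": {"T1003.001"},
}

# Techniques YOU actually need, per your threat assessment --
# not the full generic matrix (illustrative set).
needed = {"T1021.002", "T1003.001", "T1486", "T1098"}

covered = set().union(*detections.values())
gaps = needed - covered
print(sorted(gaps))  # techniques you need but do not detect
```

The same three lines against the full ATT&CK matrix would report hundreds of "gaps," most of them irrelevant to your risk profile; scoping `needed` is where the actual thinking happens.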
#6
Finally, late detection (item 6) is a case where the “better late than never” principle does not work well. Still, I want to be mindful of this when I am thinking about my threat detection strategy. Sometimes the timing makes a difference between a success (catching ransomware before it encrypts) and a failure (like, I dunno, detecting ransomware by looking for a ransom note). Detection timing analysis perhaps calls for further study (and reminds me of this book).
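One simple way to reason about timing is to measure detection latency against a per-threat budget beyond which detection no longer changes the outcome; the timestamps and the one-hour ransomware budget below are made-up illustrations:

```python
from datetime import datetime, timedelta

# Illustrative timestamps: when activity started vs. when we alerted.
first_activity = datetime(2024, 1, 10, 9, 0)
alert_time = datetime(2024, 1, 10, 9, 45)

# A per-threat "useful by" budget: past this, detection no longer
# changes the outcome (the one-hour figure is made up).
USEFUL_WINDOW = {"ransomware": timedelta(hours=1)}

latency = alert_time - first_activity
useful = latency <= USEFUL_WINDOW["ransomware"]
print(f"detected in {latency}, useful={useful}")
```

Tracking this per threat turns “too late for any useful outcome” from a vague worry into a number you can put on the detection roadmap.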
So, thoughts? :-)
Thanks to Brandon Levene for his super insightful comments!