Assessing a Detection Engineering Program for Maturity
What does your portfolio tell you?
I think it goes without saying that the majority of people have tried a fitness or health app to help them reach some goal. You need to take more steps, your caloric intake needs to stay under x, because you sit around too much during the day (guilty!). How about an experiment: open your health app, if you have one, and check your sleep quality. Actually, no, don't do this.
Being healthy does require some self-evaluation, and this self-evaluation brings about that moment where you stand in front of the mirror and see yourself for what you are, the good and the bad. Then you have to decide what to improve.
My point: checking on your Threat Detection Engineering portfolio or program is really similar. You need to self-reflect in an unbiased way and take in the good and the bad to become a more mature program. This brings me to some of the lessons learned while doing evaluations and building out a Threat Detection Maturity Framework for practical use.
Credits & References
As always, give credit where credit is due; none of what I am about to talk about is something I whimsically made up.
First, I'd like to acknowledge where I got the foundation of the framework: @haidermdost, an incredible engineering leader at Snowflake. Haider's initial idea of maturity as something to be measured and tracked is game changing. I highly encourage you to check out Haider's original Threat Detection Maturity Framework: https://medium.com/snowflake/threat-detection-maturity-framework-23bbb74db2bc
Next, I want to acknowledge Kyle Bailey and his work on developing his edition of a maturity framework, which can be sampled at https://detectionengineering.io/, and you can watch his talks here. A great idea that resonated with me from Kyle's implementation is developing Core Responsibilities that thematically push detection engineering scope and strategy.
Both of these resources were foundational and incredibly helpful, and each has been incorporated in some form or fashion.
Let's Begin
The Bottom Line
What you can control, what you can’t control, and what you can influence.
Within the detection engineering discipline, there are many things you may have control over, depending on the positioning and size of your Threat Detection Engineering Program; this is not the same for everyone. For instance, you may have significant control over how you detect adversaries through advanced data analytics, or over how your alerts are triaged and investigated.
But as you may come to know, there are also many things you cannot control within the sphere of Threat Detection Engineering (and that is okay). You can't control when the next zero-day drops for fill-in-the-blank technology, or when the next major black swan event happens in the industry. You cannot fully control what gaps in monitoring or capability your tools may have. You may not be able to control which technologies the organization chooses to use to protect itself.
However, the goal of a Threat Detection Engineering Program is neither to garner control nor to call out the lack of it, but rather to foster influence. This is where the Threat Detection Maturity Framework really thrives: it allows programs to scope, prepare, and apply the needed changes to their own strategy as opportunity arises.
Threat Detection Maturity Framework
The Threat Detection Maturity Framework is a standardized method for assessing a Threat Detection Program's performance by identifying areas of strength and areas of weakness. Additionally, it can help you understand the major dependencies or blockers that keep your detection capability from reaching the next level.
The framework consists of identifying areas of responsibility, which break the Threat Detection Program down by its various operations and technologies, and then ranking their maturity by level. (Please check out the levels here.)
These levels are not set in stone, but they are foundational to the framework, meaning you can always adjust what they mean for your own program. At a high level, these are the maturity levels and how I defined them.
M1: Maturity Level 1: Ad-Hoc
- Maturity Level 1 is considered ad-hoc, meaning the engineering effort, procedures, strategy, or technology is not heavily structured. This impacts efficiency, resources, and capabilities, which can introduce limitations and barriers.
M2: Maturity Level 2: Organized
- Maturity Level 2 revolves around more proactive actions, defined procedures, organization, and improved strategy and technology. This is an area where engineering is engaged regularly and with familiarity, and some developments are already working toward greater improvements. Some limitations may exist, but the program is overall operational in various areas.
M3: Maturity Level 3: Optimized
- Maturity Level 3 is the highest maturity that can be achieved and is considered fully optimized. When optimization is achieved, strategy is well defined, automation is increased (or even end-to-end), solutions are utilized to their full extent, and the program as a whole is fully engaged and proactive, whether in its processes or its technology.
Additionally, the Maturity Framework breaks down various subjects and areas of responsibility, such as Processes, Personnel, and Data, Tools & Technology. I won't explain them all, but this is the lens I used for deciding how to further break down and define the framework.
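To make the structure concrete, below is a minimal sketch in Python of how the levels and categories could be represented. It is an assumed representation rather than the framework's official implementation; the PO and DT abbreviations match the tags used later in this post, while PE and the exact category list are my own placeholders.

```python
from enum import IntEnum

# A minimal sketch of the framework's building blocks; this is an assumed
# representation, not the author's exact implementation.

class MaturityLevel(IntEnum):
    AD_HOC = 1      # M1: little structure; efficiency and capability suffer
    ORGANIZED = 2   # M2: defined procedures, regular engagement, some limits remain
    OPTIMIZED = 3   # M3: well-defined strategy, heavy automation, fully proactive

# Category abbreviations matching the tags used later in this post
# (e.g., PO.1.10, DT.1.5); the full list here is an assumption.
CATEGORIES = {
    "PO": "Processes",
    "PE": "Personnel",
    "DT": "Data, Tools & Technology",
}
```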
Threat Detection Program Facets
There are two overarching categories, which I describe as Threat Detection Engineering Technology and Threat Detection Engineering Strategy.
These themes can be broken down as the following:
Threat Detection Engineering Technology: the technology that enables the program to reach its objectives.
Threat Detection Engineering Strategy: the operational methods that enable the program to thrive and operate effectively.
These two main themes describe whether a program is technologically behind or advanced, and whether its operations are broken, too manual, or efficient. Let's take the following as an example:
Let's say the organization you are protecting has technology that lets you leverage various machine learning models, which you use as contextual events for high-fidelity correlation detections. But the entire process of maintaining these configurations or searches is cumbersome, and you have no versioning on the configs or query languages. You could argue that your technology may be organized, yet your process is ad-hoc. Right there, you can identify an area of improvement. This is what the framework pinpoints.
Let's Continue
Threat Detection Engineering Strategy
Within strategy, we have various operations-driven categories that are expanded on. These are at times subjective categories that can vary somewhat depending on the size of the program.
Threat Detection Engineering Technology
The technologies of a detection program are kept high-level, in an effort to fully capture what would exist in a more robust detection program.
Applied Maturity Frameworks
The biggest challenge with the maturity framework is figuring out a way to practically apply it to your organization so that it produces meaningful information that can be acted on. I decided to break each area down into its different levels of maturity so that it can be scored. (No need to squint; this can be viewed here.)
Each subject is broken down by a tag so it's easily digestible. The tagging format is category.maturityLevel.subjectNumber. Therefore, for the Process category at maturity level one (ad-hoc), the subject “No standardized acceptance criteria for detection development” would be PO.1.10, and so on.
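As a quick illustration, here is a small sketch of how these tags could be built and parsed in code. The helper names are hypothetical, and the subject numbering is whatever your own framework assigns.

```python
from typing import NamedTuple

class SubjectTag(NamedTuple):
    """A parsed maturity tag, e.g. 'PO.1.10' (hypothetical helper)."""
    category: str         # e.g. "PO" for Process, "DT" for Data, Tools & Technology
    maturity_level: int   # 1 = ad-hoc, 2 = organized, 3 = optimized
    subject_number: int   # position of the subject within that category/level

def parse_tag(tag: str) -> SubjectTag:
    category, level, subject = tag.split(".")
    return SubjectTag(category, int(level), int(subject))

def format_tag(category: str, maturity_level: int, subject_number: int) -> str:
    return f"{category}.{maturity_level}.{subject_number}"

# Example from the article: Process, level 1 (ad-hoc), subject 10
assert format_tag("PO", 1, 10) == "PO.1.10"
assert parse_tag("DT.1.5").maturity_level == 1
```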
Processing and Scoring Maturity
I found that the best way to understand maturity was to survey the engineers within the program. For each subject, they would mark what was applicable based on their knowledge of the program. The results were then calculated to understand how many votes each subject, maturity level, and category obtained, ultimately providing visibility into which maturity level scored highest for each subject.
How to do this is simple: every engineer is surveyed, their scores on each subject are collected, and the percentages are determined based on the votes from engineers, e.g., 60% of votes for category x were ad-hoc.
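Here is a minimal sketch of that tally, assuming votes are recorded as (engineer, tag) pairs using the tag format above. The function and sample data are illustrative, not a prescribed tool.

```python
from collections import Counter

# Hypothetical survey results: each engineer marks the tags they consider applicable.
votes = [
    ("alice", "DT.2.3"), ("alice", "DT.1.5"),
    ("bob",   "DT.2.3"), ("bob",   "DT.3.1"),
    ("cara",  "DT.2.7"), ("cara",  "DT.1.5"),
]

LEVEL_NAMES = {1: "Ad-hoc", 2: "Organized", 3: "Optimized"}

def maturity_distribution(votes, category):
    """Percentage of votes per maturity level for one category."""
    levels = Counter()
    for _, tag in votes:
        cat, level, _ = tag.split(".")
        if cat == category:
            levels[int(level)] += 1
    total = sum(levels.values()) or 1
    return {LEVEL_NAMES[lvl]: round(100 * n / total) for lvl, n in levels.items()}

print(maturity_distribution(votes, "DT"))
# e.g. {'Organized': 50, 'Ad-hoc': 33, 'Optimized': 17}
```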
In practical use:
Let's say that for the category of Data, Tools, and Technology, the overall voting from engineers generated the following maturity scores:
- 10% Optimized
- 65% Organized
- 25% Ad-hoc
Investigating what was scored ad-hoc, we find that for the SIEM technology, DT.1.5 = “Integration to SIEM for various tooling is ad-hoc, and difficult to implement” was a top score for its subject. This gives the team the ability to drill down and prioritize improvements to how SIEM integrations are managed and configured.
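Continuing the sketch above, the drill-down itself is just a second tally: filter the votes to one category and one maturity level, then rank subjects by vote count. The subject map and helper name below are illustrative assumptions.

```python
from collections import Counter

# Hypothetical mapping of tags to subject descriptions (DT.1.5 comes from the
# example above; the other entry is made up for illustration).
SUBJECTS = {
    "DT.1.5": "Integration to SIEM for various tooling is ad-hoc, and difficult to implement",
    "DT.1.2": "Example placeholder subject",
}

def top_subjects(votes, category, level, n=3):
    """Most-voted subjects for one category at one maturity level."""
    tally = Counter(
        tag for _, tag in votes
        if tag.startswith(f"{category}.{level}.")
    )
    return [(tag, SUBJECTS.get(tag, "unknown subject"), count)
            for tag, count in tally.most_common(n)]

# Using the `votes` list from the previous snippet:
# top_subjects(votes, "DT", 1) -> [('DT.1.5', 'Integration to SIEM ...', 2)]
```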
Tribal Knowledge vs Knowledge Tracking
More often than not, an engineer reading this already has a list of their own program's problems in their head, so why would another framework be any different? This crossed my mind as well when I went through this journey and ran through Haider's initial framework.
But what I found is that the value is not just in identifying problems; more than anything, it is in removing tribal knowledge about gaps and issues and communicating them in a way that leadership will understand. This helps the program strategically, because leadership becomes educated on where the detection program is challenged. It also helps leadership understand their own program's limitations and starts conversations about how to improve and get from point A to point B, or from ad-hoc to organized.
In practical use, this has helped generate a better strategy overall within the program and target areas where we may find an immediate impact. It builds trust with leadership and an understanding that the program is pushing to improve as it supports the overall business.
A Nod to Voter Bias
As I said in the beginning, an unbiased self-evaluation…
There is a disclaimer: this method of voting on your own program with your own engineers may skew results due to potential bias. This is known, and something you may have to be cognizant of, but the overall goal is to grasp program maturity. Suppressing bias is not an easy task, especially when you are grading your own operations. Yet I have found, so far, that the best way to understand maturity is by working with engineers directly.
Side Notes
- Some of the maturity subjects may not be well defined, or may be too subjective. There are cases where I am not an expert on what is mature versus not for specific subjects. Therefore, if you have comments for changes or adjustments, let me know!
- There may also be some overlap between different subjects.
- Remember, maturity depends greatly on the organization, so not every area may be applicable. I tried to generally capture the main subjects within this space, based on the initial framework.
Thanks
Thank you for reading if you have made it this far. It's been a fun journey researching and connecting with a great community of engineers. The original Detection Engineering Maturity Framework is top-notch; please review it when you have a chance.
I hope to publish some more material in the near future.
If you have any comments let me know!