Security Analytics: How to rank use cases based on the "Quick Wins" approach?

When planning a Security Monitoring project, whether the deliverable is a rule that triggers alerts or an interactive dashboard to support hunters, once you have gathered an initial set of feasible ideas, where do you start?

A Quick Win is commonly defined as the result of the "High Value" plus "Low Effort" combo. In practice, here's how I see this approach in the context of an organization investing in a new project:

The "Quick Wins" is a reliable way of providing reassurance to management, including those who invested in technology and people, paving the way for longer-term goals and more ambitious deliverables.

So if you ever thought of this as a purely technical move, reconsider.

Successfully carrying out this approach translates into several new, concrete contributions made early in a project, right after its inception, increasing the visibility of the Security Analytics practice across the organization.

Without getting into the obvious question of "How do you measure success?" (the answer varies vastly, depending for instance on how an organization deals with risk appetite and risk management), let's assume the "Quick Wins" approach is the one giving you the best chances of success.

The Scope Creep

If you’ve ever worked on a Security Monitoring or SIEM project, involving Big Data or not, how many boxes can you tick from the following?

  1. Security Engineers focusing too much on details (no big picture);
  2. SecOps ideas (let alone feedback) never making it into the goals;
  3. A high-value idea discarded due to additional investment required;
  4. One-shot, unpolished set of rules delivered based on canned content.

Looking over the list, it's not hard to spot one common theme: the scope of the use cases to be delivered as the output of a SIEM project. No wonder this scope is so important, as it's tied directly to the project's goal.

No clear, defined use case ideas equals "No goal"

One of the first things I ask for during a customer engagement is their Risk Assessment reports, or any related document that would help me understand how mature the organization is in terms of risk management.

In other words: it opens the "can of worms" to start use cases discussions.

When there's some material available, the challenge is first understanding how current those risk reports are, and later how to extract valuable input from them, an exercise that usually uncovers lots of pain points.

If you are lucky enough to find those, the exercise should be about balancing the use case ideas, mainly SIEM rules in a SecMon context, against the highest risks highlighted in those reports.

Since the vast majority of organizations don't have dedicated teams or an established process around risk management, you should try getting input from somewhere else.

I suggest you get in contact with teams responsible for the following:

  • Security Operations (major pain points, recurrent, latent security issues);
  • Big projects/investments in security arsenal (clear red flags there);
  • Threat Modeling exercises (usually with help from data/system owners);
  • Compliance (not so exciting but gaps here hit hard when not in place).

A few organizations already have a dedicated internal team or practice for continuous security assessment (red/blue teaming), which can be leveraged to weigh the ideas, and also to evaluate the detection posture before and after a use case development sprint.

Use cases "pots"

If you're still looking for more ideas, there are many SIEM use case pots out there. Follow the #ThreatHunting hashtag on Twitter to get a taste.

There are several brilliant Infosec folks giving golden advice there; it's a good way to learn and to provide instant feedback.

The plan is to get inspired by those ideas and, based on them, build content (rules, reports, dashboards) that helps your teams speed up investigations. Think of a good alert as an entry point for Threat Hunting.

To get started, here are a few resources to inspire you:

If you are into Splunk, take a moment to check David Veuve's Security Essentials App, which is packed with use case ideas (and nice SPL code!).

In case you are facing a SIEM migration, one item to consider is moving old rules to the new platform. And here's another use case pot: rules that have proved to be useful, clearly linked to previous fruitful alerts (the case management system is the supporting data source here).

The goal is clear, now what?

Now assume you have a list of ideas after doing your research. Where do you start (scope), and how do you deliver in a continuous way?

This post focuses on the former, but I will approach the latter soon in another article, including practical tips on how to lay out a use case development practice (methodology) in your organization.

Given that you've collected enough input, there are hopefully a bunch of ideas in the pipeline. It would be great to draw a roadmap, right?

You need to be very careful at this step. Since this road is sort of endless (continuous delivery), you need to make sure stakeholders get the right message: the list should be seen as a kickoff, a snapshot of the current state.

Tomorrow, after a new vulnerability is discovered, after a new report is published, or even after a kick-ass idea is presented by the team, you must rank the new item and reorder the use case backlog.

How to rank a rule?

Before starting here, let me be clear: this scoring approach is far from being a use case deployment strategy based on risk or whatever weird science/black magic your organization may rely upon.

Rather, it is based solely on the estimated Value an alert will provide, and the Effort needed to develop and deploy the rule that triggers it.

What are the "Quick Wins" criteria? Remember we are looking for high value (benefit) combined with low effort (cost).

Laying out a ranking involves defining criteria and factors to account for, and this is highly subjective. So think of it as a suggested exercise for building up your own system or formula.

How to rank or score a rule?

The goal is basically to gather rule ideas in a list and apply proper scores to them based on the "Quick Wins" criteria (broken down into factors).

So here comes our friend Excel to the rescue! Hopefully, this can be moved to a proper app or system soon. Imagine, for instance, filling it in directly from an issue tracker (e.g., JIRA).

Hypothetical list of use cases scored using MS Excel

The spreadsheet is publicly available at Google Drive. Feel free to move it to GitHub or the likes; happy to update the link here.

So how do you read and use it?

Basically, each of the factors under "Benefit" and "Effort" should be scored based on a value given by you or a team. Make sure everyone grasps each factor's meaning (rationale), then take the median of the scores given by the team members.

Obviously, having an odd number of participants (≥3) makes it easier.

The score scale goes from 1 to 5. A score of 5 means the highest value or lowest effort. A score of 1 means the lowest value or the highest effort.
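As a side note, using the median (rather than the mean) keeps a single outlier vote from skewing a factor's score. A minimal sketch in Python, with made-up factor scores from three hypothetical analysts:

```python
from statistics import median

# Hypothetical scores (1-5) given by three analysts for one rule idea.
# Factor names mirror the "Benefit" columns below; the votes are made up.
team_scores = {
    "Relevance": [4, 5, 4],
    "Fidelity":  [3, 2, 3],
    "Severity":  [4, 4, 5],
    "Clarity":   [5, 4, 4],
}

# Median per factor: one dissenting vote doesn't drag the result.
consensus = {factor: median(votes) for factor, votes in team_scores.items()}
print(consensus)
# {'Relevance': 4, 'Fidelity': 3, 'Severity': 4, 'Clarity': 4}
```

With an odd number of voters the median is always one of the actual votes, which is why ≥3 participants makes things easier.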

Below is the meaning (rationale) of each factor, that is, how to assess each use case entry in order to determine its score. More hints are available in the cell comments in the spreadsheet (hover over a cell to view them).

— — — — — — — — — — — — — Benefit — — — — — — — — — — — —
A list of attributes expected from a high-quality alert.

  • Relevance = How (security/business) relevant is the alert?
    Overall priority and org visibility are captured here.
  • Fidelity = How reliable is it?
    From not reliable (1) to extremely reliable (5). Think about the false-positive rate this alert is subject to.
  • Severity = Which severity/impact label will likely stamp this alert?
    1 — Informational
    2 — Low
    3 — Medium
    4 — High
    5 — Critical
  • Clarity = How easy is it for an analyst to understand and assess it?
    Does it require extra effort to triage? This captures the complexity to understand the alert.

— — — — — — — — — — — — — Effort — — — — — — — — — — — —
Basically, factors mapped from rules development stages.

  • Prereqs = How easy to get prerequisites (data sources) in place?
    It means how easy to get everything ready before prototyping a rule.
  • Coding = How easy to prototype or craft a query/search for this rule?
    Of course, the more complex the rule is, the lower the score is.
  • Testing = How easy to test and validate the rule?
    Think about a use case that will require a lab or red team exercise.
  • Document = How easy to document (handling guidelines)?
    This also includes the handover to the SOC (demo, etc).

As you can see, the factors are sorted by weight (importance): the first ones in each criterion are the most important and the most subjective.

The weights should be changed according to what you or your org values most. Each criterion's score results from the sum of the individual factor scores, each multiplied by its weight.

The overall score is the product of the Benefit and Effort scores. That simple.
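The whole calculation fits in a few lines of Python. This is just a sketch: the weights below are illustrative assumptions (4 down to 1, mirroring the sorted order above), not the spreadsheet's actual values, and the sample use case is hypothetical.

```python
# Illustrative weights: most important factor first, as in the lists above.
BENEFIT_WEIGHTS = {"Relevance": 4, "Fidelity": 3, "Severity": 2, "Clarity": 1}
EFFORT_WEIGHTS  = {"Prereqs": 4, "Coding": 3, "Testing": 2, "Document": 1}

def criterion_score(scores, weights):
    """Sum of each factor score (1-5) multiplied by its weight."""
    return sum(scores[factor] * weight for factor, weight in weights.items())

def overall_score(benefit_scores, effort_scores):
    """Overall rank value: Benefit score x Effort score.

    Remember the scale: 5 means highest value or LOWEST effort, so a
    bigger product always points at a Quick Win candidate.
    """
    return (criterion_score(benefit_scores, BENEFIT_WEIGHTS)
            * criterion_score(effort_scores, EFFORT_WEIGHTS))

# Hypothetical use case: high value, easy to build.
benefit = {"Relevance": 5, "Fidelity": 4, "Severity": 4, "Clarity": 5}  # -> 45
effort  = {"Prereqs": 4, "Coding": 5, "Testing": 3, "Document": 4}      # -> 41
print(overall_score(benefit, effort))  # 45 * 41 = 1845
```

Sorting the backlog by this product, highest first, is exactly what the spreadsheet does; swap in your own weights before trusting the order.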

"But why building a rank if I already know how to sort my list?"

Good for you, but that's usually not the way it works.

Besides helping prioritize the development backlog, when there are different opinions (and there will be), it's easier to follow standard criteria, leading to consistent results.

As a good exercise to evaluate this ranking system, save your own sorted list and later compare it to the end result of following the approach presented here.

If the end result is close to your sorted list, it means the criteria, along with the factors and weights, are the ones that make sense to you, translating the way you value an idea based on "Quick Wins".

Happy to get feedback, enjoy!
