Threat Intelligence-led content creation

Martin Connarty
Jan 25, 2022 · 10 min read


The below was originally posted on my site — infosecamateur.com in July 2021

Follow me on Twitter! @mconnarty

Note Jan 2022 -

Since I initially wrote this, Mitre have continued to develop the data source relationships that are used in the Mitre ATT&CK framework. They are now structured in a STIX format which allows the mappings to be machine readable.

To understand coverage against it, the challenge now becomes mapping the fields available to you in the raw logs to a common language (e.g. CIM in Splunk), which in turn maps to the higher-level abstract Mitre Data Sources. The OTRF OSSEM project is in this space and will be interesting to watch, although I feel that for this to really take off it will need to be absorbed into the rest of Mitre's catalogue and fully embraced by the industry.
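The layered mapping described above can be sketched as two lookup tables: raw vendor fields map to a common model, which in turn maps to abstract ATT&CK data sources. This is a minimal illustration only; the field names below are my own and are not taken from OSSEM or the Splunk CIM.

```python
# Hypothetical sketch: raw vendor log field -> common-model field.
RAW_TO_COMMON = {
    "EventData.Image": "process_path",
    "EventData.CommandLine": "process_command_line",
}

# Common-model field -> abstract ATT&CK-style data source component.
COMMON_TO_DATASOURCE = {
    "process_path": "Process: Process Creation",
    "process_command_line": "Process: Process Creation",
}

def data_sources_covered(raw_fields):
    """Return the set of abstract data sources reachable from the raw fields we collect."""
    common = {RAW_TO_COMMON[f] for f in raw_fields if f in RAW_TO_COMMON}
    return {COMMON_TO_DATASOURCE[c] for c in common if c in COMMON_TO_DATASOURCE}
```

Once both mappings exist in machine-readable form, coverage questions ("which abstract data sources can my logs actually feed?") become simple set operations.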

Part 1 — Overview

About this blog post

Within the SOC, a lot of our time is spent triaging, analysing and documenting events as they come in. But this is perhaps only half of our role; the other half is developing new content to help us protect against and detect an ever-changing threat landscape.

I wanted to see if there is a structure and process for creating content that avoids duplication of effort, allows sharing, highlights our gaps, and is repeatable and scalable. This blog post shows what I have found so far.

Initially this was a one-part blog post; however, as I research and implement more of these ideas, I feel it is better to split it into two parts: the first an overview of the different components, and the second a closer look at what they look like in practice.

Introduction

To protect and defend in depth, it is hugely valuable to have a repeatable process and structure for creating the content that is based on the threats you encounter.

A process that is modular in structure, and machine readable where possible, will be efficient and will allow you to ask good questions about your coverage, such as where gaps lie.

On top of traditional atomic IOCs (file hashes, IP addresses, etc.), the industry is moving more and more towards sharing detection methods via the Sigma project and structured data formats such as STIX. This will speed up much of the process below, allowing content creators to focus on adapting it for their environment.

The following is my view of how much of it maps together, which I’ve based heavily on elements of several projects such as Sigma, Atomic Threat Coverage and John Lambert’s Talk/Blog on the Githubification of Infosec.

The components:

Types of Threats

Security can be defined as measures taken to reduce the risk from a threat. It is therefore imperative to have some idea of the threats before the appropriate measures can be put into place.

Donald Rumsfeld, a former US Secretary of Defense, famously stated:

“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.”

This quote captures a categorisation that has been used for many years to map threats; here we will use three categories:

Known Known Threats — These are threats you are directly aware of, in a typical security context that may be a malicious URL or piece of malware.

Known Unknown Threats — Threats you may be able to determine based on the threats you do know about. For example, you may pivot on known malicious infrastructure to identify more, or you may look for other examples of a TTP being used. The TTPs presented in Mitre ATT&CK offer a huge repository of techniques which can be a great place to start to identify techniques to build detections.

Unknown Unknown Threats — Threats of which you have no prior knowledge. As highlighted above, these are a challenge precisely because you have nothing to start from. While many TTPs are made public through intelligence or red-team reports, we have to remember that a skilled adversary will have a catalogue of unknown tools and techniques which they may hold back until the right moment (think Equation Group). This is where traditional threat hunting, such as looking for anomalies, is valuable.

Threat modelling

Threat modelling is the method used to determine the threats and risks you face and most care about. There are many different methods used in threat intelligence, and it is a huge subject in its own right; a common tool is the Diamond Model.

Every organisation, person or internal resource will face both different and shared threats. For example, your organisation may hold Personal Identifiable Information (PII), which will face different threats to your Intellectual Property. Different adversaries want different things, and so good threat modelling can highlight the right areas of risk.

Threat modelling will ultimately shape your priorities in deciding which Known Known and Known Unknown threats you are most interested in.

Documenting the threat

When a new threat becomes known, it needs to be documented. These threats could take the form of adversaries (e.g. APT29), but also specific toolsets (e.g. Cobalt Strike) and even particular business concerns (such as high-value assets). We can see an example of this within the Mitre ATT&CK Software and Groups pages.

You will want to compile what I'd describe as your own "Threat Dossiers", which should record information such as:

  • Background description of the Adversary, Toolset or High Value asset.
  • Change log for what is known about it.
  • Links to any rules / TTPs employed (e.g. Mitre ATT&CK Techniques)
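A dossier with the fields above can be sketched as a small record type. This is a hypothetical structure of my own; the field names are illustrative, not from any standard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "Threat Dossier" record covering the bullets above.
@dataclass
class ThreatDossier:
    name: str                                       # adversary, toolset, or high-value asset
    description: str                                # background on the threat
    changelog: list = field(default_factory=list)   # dated notes on what is known
    techniques: list = field(default_factory=list)  # linked ATT&CK technique IDs
    rules: list = field(default_factory=list)       # linked detection rule IDs

dossier = ThreatDossier(
    name="Cobalt Strike",
    description="Commercial C2 framework frequently abused by adversaries.",
)
dossier.techniques.append("T1059.001")  # e.g. PowerShell execution
dossier.changelog.append("2021-07: initial dossier created")
```

Keeping dossiers structured like this, rather than as free text, is what later lets you query relationships (which rules cover which dossier, and so on) mechanically.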

Developing the use cases

The term Detection Engineering encompasses the identification of these TTPs and IOCs, their capture in an abstract structure, and the subsequent development of rules for platforms such as IDS, EDR or SIEM.

For an adversary, the "Pyramid of Pain" describes how difficult changing each type of indicator is. As defenders and detection engineers, we must ensure that we do not remain fixated on detecting and preventing the lower-level elements. While important to block, these have a far shorter lifespan, and the further up you abstract, the more likely you are to find a "known unknown" or "unknown unknown" threat.

(Image from https://www.netsurion.com/articles/the-pyramid-of-pain)
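One practical use of the pyramid is to rank detection work by how painful each indicator type is for the adversary to change. The scoring below is a hypothetical sketch of my own, loosely following the pyramid's tiers.

```python
# Hypothetical pain scores: higher = harder for the adversary to change.
PAIN_LEVELS = {
    "hash": 1,              # trivially changed
    "ip": 2,
    "domain": 3,
    "network_artifact": 4,
    "host_artifact": 4,
    "tool": 5,
    "ttp": 6,               # changing behaviour is hardest
}

def prioritise(indicators):
    """Sort indicators so the hardest-to-change (most valuable to detect) come first."""
    return sorted(indicators, key=lambda i: PAIN_LEVELS[i["type"]], reverse=True)
```

Feeding a backlog of candidate detections through something like this keeps TTP-level work from being crowded out by easy hash and IP matching.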

Security Controls

Being able to detect threats is great, but it is even better if you can prevent them as well. Part of detection engineering is exploring the controls you can implement that may stop a threat route. A simplistic example is a malicious IP address: if you know it is malicious you can block it at the firewall, but you also want to know if someone attempts to visit it, as there are questions that should be answered about why.

A determined adversary is unlikely to stop because one of their methods has failed, so preventing a threat through a control should always generate a detection as well, so that the bigger picture can be determined and the response can be more comprehensive.
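The "prevent, but always detect" point can be sketched as follows. The block list and alert sink here are stand-ins of my own, not a real firewall API.

```python
# Hypothetical sketch: every preventive control also raises a detection event,
# so the bigger picture survives even when the block succeeds.

blocked_ips = {"203.0.113.10"}   # known-malicious (RFC 5737 documentation range)
alerts = []

def handle_outbound(src_host, dst_ip):
    """Block known-bad destinations, but always record the attempt for analysts."""
    if dst_ip in blocked_ips:
        alerts.append({"host": src_host, "dst": dst_ip, "action": "blocked"})
        return False   # connection prevented
    return True        # connection allowed
```

The key design choice is that the alert is raised on the attempt, not on a successful connection: the blocked attempt is exactly the signal that tells you an adversary is active and about to try something else.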

Security Controls are a big topic, and form a major component of many accreditation frameworks, such as Cyber Essentials, NIST 800 etc.

Abstract Rule (Sigma)

Simply put, a detection rule has a data input, some logic, and an outcome. Detection rules should therefore first be written in an abstract way which describes the intent, documents the dependencies, and ties the rule to any actions (playbooks) that should be taken when it fires. Context is key: someone responding to the alert must be easily able to understand why it fired and what it means.

Any other relationships can be mapped in this abstract rule, such as links to any other external references as well as a description of some of the background around it.

The Sigma project (https://github.com/SigmaHQ/sigma) provides a generic rule format: rules can be written in Sigma and, using tools such as sigmac, translated into rules specific to your platform, such as Splunk or Sentinel.

While some of the complexity in the eventual rule may not be possible to convey in a way that allows automatic conversion, the goal is ultimately to centrally document the logical purpose and all of the relationships it has.
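To make the idea concrete, here is a minimal Sigma-style abstract rule held as a plain data structure, plus a toy rendering into a Splunk-like search. Real conversion is done by tools such as sigmac; this sketch of my own only illustrates keeping intent, context and references together in one abstract record.

```python
# Hypothetical abstract rule: intent, context and references live alongside the logic.
rule = {
    "title": "Suspicious Encoded PowerShell",
    "description": "PowerShell launched with an encoded command.",
    "references": ["https://attack.mitre.org/techniques/T1059/001/"],
    "logsource": {"product": "windows", "category": "process_creation"},
    "detection": {"selection": {"CommandLine|contains": "-EncodedCommand"}},
}

def to_splunk(r):
    """Naively render the single 'contains' condition as a Splunk-like query string."""
    (fieldmod, value), = r["detection"]["selection"].items()
    field = fieldmod.split("|")[0]          # strip the Sigma field modifier
    return f'{field}="*{value}*"'
```

Everything an analyst needs (why the rule exists, what it maps to) travels with the record, while the platform-specific query is derived, disposable output.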

Finally, rules have a lifetime and can be dynamic; the rule should reference any tuning change-log and document a review schedule.

Security Tool Rule

Once the abstract purpose of the rule has been determined, then part of the detection engineering process will be to write the rule itself. This can take many forms, it may be in the SIEM, in an EDR, in an IDS such as Snort, or anywhere else where logic is placed over data to detect.

The rule itself is unlikely to carry any of this context and these references on its own, which is why having the abstract rule is so useful.

Data Source Requirements

Your ability to detect threats will depend on the visibility you have. For example, if you want to deploy IDS signatures, you need an IDS installed on the right network segment and, where traffic is encrypted, a means of inspecting it. If you want to detect events on an endpoint, you need to either detect them on the endpoint via a tool such as AV or EDR, or in the logs retrieved from it.

During the detection engineering phase, you will need to determine your best method for either detecting or preventing threats, which may be specific log fields from specific data sources. This exercise will allow you to identify gaps in coverage and can shape your priorities for covering them.

In support of this, Mitre have recently begun mapping techniques to the data sources required to detect them. Information about this can be seen here: https://github.com/mitre-attack/attack-datasources
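The gap-identification exercise described above amounts to a set difference per technique. The technique-to-data-source mappings below are illustrative placeholders of my own, not pulled from the real ATT&CK data.

```python
# Hypothetical requirements: ATT&CK technique ID -> data sources needed to detect it.
REQUIRED = {
    "T1059.001": {"Process Creation", "Script Execution"},
    "T1071.001": {"Network Traffic Content"},
}

# What our environment actually collects today.
AVAILABLE = {"Process Creation"}

def coverage_gaps(required, available):
    """Return, per technique, the data sources we still need to start collecting."""
    return {t: sorted(srcs - available)
            for t, srcs in required.items()
            if srcs - available}
```

Run against the full machine-readable mappings, output like this becomes a prioritised shopping list for new log sources.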

Playbooks

If and when the rule does fire, the context provided in the abstract rule can help the person responding to it to understand it and investigate it further. It is worth documenting any suggested next-steps that should be taken to either aid the investigation or take remediating actions.

This should be modular — the specific actions, e.g. “Block an IP on the firewall”, or “Contact user” can be repeated through many rules and do not need to be duplicated.
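Modularity here just means each action is defined once and referenced by name from many rules. A minimal sketch, with action and rule names of my own invention:

```python
# Each playbook action is defined exactly once.
ACTIONS = {
    "block_ip": "Block the indicator IP on the perimeter firewall.",
    "contact_user": "Contact the affected user to confirm the activity.",
    "isolate_host": "Isolate the host from the network pending investigation.",
}

# Rules reference actions by key; the action text is never duplicated.
PLAYBOOKS = {
    "suspicious_outbound_c2": ["block_ip", "isolate_host"],
    "phishing_link_click": ["block_ip", "contact_user"],
}

def expand(rule_name):
    """Resolve a rule's playbook into its full, ordered action descriptions."""
    return [ACTIONS[a] for a in PLAYBOOKS[rule_name]]
```

When an action's wording or procedure changes (say, a new firewall), you update one entry and every referencing playbook picks it up.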

Confidence Tests

Once a detection or prevention rule has been put into place, there needs to be a way to know it will work as intended. Sometimes this is a case of manually creating the conditions under which the rule would fire, e.g. running a command which would trigger it, but there are also several frameworks and tools which can help here, such as the Atomic Red Team tests from Red Canary and the Mordor datasets.
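In essence, a confidence test generates the condition a rule should fire on and asserts that the detection logic catches it. This sketch is loosely modelled on the Atomic Red Team idea; the function names and the simulated telemetry are my own.

```python
def detects_encoded_powershell(command_line):
    """Toy detection logic standing in for the deployed rule under test."""
    return "-EncodedCommand" in command_line

def confidence_test():
    """Simulate the telemetry an atomic test would produce, and check the rule fires."""
    simulated_event = "powershell.exe -EncodedCommand aQBlAHgA"
    return detects_encoded_powershell(simulated_event)
```

Running such tests on a schedule (not just at deployment) also catches rules that silently break when a log source or parser changes underneath them.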

Mitre ATT&CK and more

The Mitre ATT&CK framework has become a benchmark for many security tools over time. As new techniques are observed, they are mapped into the framework along with information about the adversary, data source requirements and more.

More recently, the Mitre D3FEND matrix has been created to help people determine the capabilities needed to detect or prevent the TTPs in ATT&CK.

As highlighted above, Mitre are also restructuring the data source requirements (https://github.com/mitre-attack/attack-datasources), so that defenders can identify exactly what they need in order to detect effectively.

Conclusion

The industry is moving more and more towards a modular, open-source approach to a threat intelligence-led content development process, with Mitre leading the charge in many ways.

Mitre has become the vendor yardstick and so their work will in my view lead to a rapid acceleration of security capability as people are able to collaborate and share more and more.

Defenders will be able to focus more on defending and discovering new, unknown unknown threats, rather than spending significant time adapting already-known threats to their organisation.

Projects in this space/Further reading

The Atomic Threat Coverage project (https://github.com/atc-project/atomic-threat-coverage)

This project does a great job of much of the above mapping, tying together Sigma rules with playbooks using the ATC RE&CT framework, as well as data sources and Atomic Red Team tests. Additionally it can generate Markdown and Confluence pages and display coverage through Kibana dashboards. It was one of the main influences on the above.

Open Security Collaborative Development (https://oscd.community/)

The goal of this project is "to solve common problems, share knowledge, and improve general security posture".

Microsoft Threat Intelligence’s John Lambert Blog “The githubification of infosec” https://medium.com/@johnlatwc/the-githubification-of-infosec-afbdbfaad1d1

John Lambert discusses many of the above concepts in far more detail, and was a major inspiration for the development of the above.
