Summiting the Pyramid: Level Up Your Analytics

Jon Baker
Published in MITRE-Engenuity
Sep 13, 2023 · 6 min read

Written by Roman Daszczyszak, Steve Luke, and Ross Weisman.

The Pyramid of Pain introduced the world to the idea that if defenders focused on identifying and detecting adversary tactics, techniques, and procedures (TTPs), it would be harder for adversaries to evade detection. The higher up the Pyramid a defender can detect, the greater the cost imposed on the adversary.

The Pyramid represents the relationships between different indicators and how much pain it would cause the adversary if you were to remove those indicators from their toolbox. As the figure below shows, hash values are trivial for an adversary to replace, whereas TTPs are very difficult to change.

Figure 1: David Bianco’s Pyramid of Pain (https://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html)

The MITRE ATT&CK® framework supplemented this idea by providing a library of known adversary TTPs which we could monitor and detect. To write analytics that are hard to evade, detection engineers aim to write analytics that fit into the top of the pyramid by focusing their research on understanding and detecting specific ATT&CK techniques. However, many analytics fail to reach the summit because they are dependent on a specific tool or artifact that falls lower on the pyramid. Enter Summiting the Pyramid!

In partnership with CrowdStrike, Inc., Fortinet, Fujitsu, IBM Security, The Microsoft Corporation, and Verizon Business, the Center’s Summiting the Pyramid project created a methodology that scores analytics against the Pyramid of Pain and changes the way we think about detection engineering by scrutinizing the components within each analytic. This methodology shifts the advantage to defenders, even as adversaries evolve, and allows us to change the game on the adversary.

So how do we consistently change the game on the adversary regardless of how they try to evade our analytics? How can we craft analytics that catch more of the adversary’s behaviors and don’t require constant updates even as adversaries adopt new tools or change their network infrastructure?

Start with robustness

Robustness measures the effort an adversary must expend to evade analytics. Using the Pyramid of Pain as our map, the higher a defender climbs, the greater the robustness of an analytic. A robust analytic forces the adversary to expend significant effort to evade it, pushing them to operate at the higher, more costly levels of the Pyramid.

To measure the robustness of an analytic, we apply three steps, sketched in code below:

  1. Identify the lowest level of the OS that triggers the events the analytic uses
  2. Score the robustness of each element of the analytic
  3. Combine the scores of each element to compute the robustness of the analytic
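
To make the three steps concrete, here is a minimal sketch in Python. The observables, the OS-level tags, the example scores, and the rule for combining them are illustrative assumptions for this sketch, not the project's published scoring rules.

```python
# Minimal sketch of the three scoring steps. The observables, OS levels,
# scores, and the min() combination rule are illustrative assumptions.

# Steps 1 and 2: tag each element of the analytic with the OS level that
# generates its events and give it a robustness score
# (1 = ephemeral ... 5 = behaviorally invariant).
analytic_elements = [
    {"observable": "command line contains 'schtasks.exe'", "os_level": "U", "robustness": 2},
    {"observable": "scheduled-task creation event",        "os_level": "K", "robustness": 5},
]

# Step 3: combine the element scores. If every element must match for the
# analytic to fire, an adversary only needs to evade the weakest element,
# so a conservative combination is the minimum element score.
analytic_robustness = min(element["robustness"] for element in analytic_elements)
print(analytic_robustness)  # -> 2
```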

Building on that process, we broke the concept of robustness down into two broad categories:

  1. The data used for detection
  2. Where in the operating system the data originates

Figure 2: Deconstructing the Pyramid of Pain

For the classification of data, we care most about who has primary control: the adversary or the defender.

  • Ephemeral values, which are easy for an adversary to change: When the adversary has control, they can easily change observables low on the Pyramid of Pain (e.g., file names or IP addresses). We corral such easily changed observables and label them ephemeral data.
  • Tools used by adversaries during an attack: The adversary also has a wide selection of tools or malware that they may introduce into an environment. Data under adversary control has high variance, and when high-variance data is used for detection, the resulting analytic earns a low robustness score (e.g., Mimikatz, schtasks.exe).
  • Behaviors demonstrated by an adversary: In the best case for robustness, an analytic is based on behaviorally invariant data, which refers to actions caused by the adversary that do not change, regardless of which tools or other observables are used. If an analytic monitors solely for invariant, or low-variance, behaviors, we assign it a high robustness score, barring a change to the operating system’s behavior through an update or patch (e.g., LSASS memory access, service creation).

Once the data is classified, we assign a numerical score from 1 to 5, ranging from the low robustness of ephemeral values (1) to the high robustness of behaviorally invariant data that ideally detects all known implementations of a technique (5).
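
As a rough illustration, the classification above can be expressed as a lookup table. Only the 1-to-5 range comes from the methodology; the specific value assigned to each category below is an assumption made for this sketch.

```python
# Illustrative mapping from data categories to robustness scores. The 1-5
# range is from the methodology; the per-category values are assumptions.
ROBUSTNESS_SCORE = {
    "ephemeral": 1,               # adversary-controlled values: file names, IP addresses
    "adversary_tool": 2,          # tools or malware the adversary brings: Mimikatz, schtasks.exe
    "behaviorally_invariant": 5,  # actions the technique cannot avoid: LSASS memory access, service creation
}

def score_observable(category: str) -> int:
    """Map an observable's data category to its robustness score."""
    return ROBUSTNESS_SCORE[category]

print(score_observable("behaviorally_invariant"))  # -> 5
```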

In addition to the type of data, we also consider where in the operating system detected events originate. Our research on how monitoring at these levels affects robustness is ongoing, but for now we assign letter values, which we pair with the numeric score in the sketch after the list:

  • Application (A)
  • User (U)
  • Kernel (K)
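
To show how the two dimensions could be reported together, here is a hypothetical helper that pairs the numeric data score with the letter for the originating OS level. The combined notation (e.g., "5K") is an assumption of this sketch, not necessarily how the project reports its scores.

```python
# Hypothetical helper pairing the 1-5 data-robustness score with the letter
# for the OS level where the event originates. The combined "5K"-style label
# is an assumption of this sketch.
OS_LEVELS = {"application": "A", "user": "U", "kernel": "K"}

def analytic_score(data_robustness: int, os_level: str) -> str:
    """Combine the numeric data score with the A/U/K source level."""
    return f"{data_robustness}{OS_LEVELS[os_level]}"

print(analytic_score(5, "kernel"))  # -> 5K
```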

Level up your analytics

Our research to develop a methodology and score analytics against the Pyramid of Pain also produced guidance for making analytics more robust, along with examples of scored analytics and steps for improving an analytic’s score. For a deeper dive into our research and to learn how to improve your analytics, please visit the Summiting the Pyramid website.

What’s next?

Our research in this area is ongoing. We made some assumptions to scope our research and will continue to test these assumptions with future work. For now, here they are:

  • Behaviorally invariant features exist for some ATT&CK techniques, but not for all.
  • Behaviorally invariant features should exist for any ATT&CK technique that is based on a function of the operating system itself — e.g., scheduling a task.
  • The defender has detailed knowledge about where a particular event is generated within the operating system.

In future research, we will examine how to increase precision, and reduce false positives, while still retaining a high level of robustness. We will also examine how to define the robustness of a network analytic, as well as how using multiple analytics together increases or decreases overall robustness. We will generalize the methodology by extending the work to other platforms. We will also attempt to automate the process of scoring analytics.

Get Involved

We would love to hear about how you’re using our work! If you have any feedback or contributions you’d like to make to the project, please email us at ctid@mitre-engenuity.org or submit an issue via GitHub!

Acknowledgements

We would like to thank SpecterOps, Ultimate Windows Security, and the SigmaHQ team. We relied heavily on Sigma’s analytic repository, SpecterOps’ Capability Abstraction analysis, and the Windows event information captured by Ultimate Windows Security to underpin the technical foundation of the research concepts in our work, and we are grateful for their ongoing contributions to the open cybersecurity community.

About the Center for Threat-Informed Defense

The Center is a non-profit, privately funded research and development organization operated by MITRE Engenuity. The Center’s mission is to advance the state of the art and the state of the practice in threat-informed defense globally. Comprised of participant organizations from around the globe with highly sophisticated security teams, the Center builds on MITRE ATT&CK®, an important foundation for threat-informed defense used by security teams and vendors in their enterprise security operations. Because the Center operates for the public good, outputs of its research and development are available publicly and for the benefit of all.

© 2023 MITRE Engenuity. Approved for Public Release. Document number CT0078

Jon Baker
MITRE-Engenuity

Director and co-Founder, Center for Threat-Informed Defense