Getting Started with ATT&CK: Assessments and Engineering

Andy Applebaum
Published in MITRE ATT&CK®
11 min read · Aug 1, 2019

Over the last several weeks we’ve published posts on getting started with ATT&CK by using it for threat intelligence, for detection and analytics, and for adversary emulation. In part four of our mini-series, we’re going to talk about assessments and engineering, showing how you can use ATT&CK to measure your defenses and enable improvement. In many ways this post builds upon the prior ones, so check them out if you haven’t already!

To make this process more accessible — and following along with the other posts — we’ve broken this post down into three levels based on sophistication and resource availability:

  • Level 1 for those just starting out who may not have many resources,
  • Level 2 for mid-level teams that are starting to mature, and
  • Level 3 for those with more advanced cybersecurity teams and resources.

Getting started with “assessments” might sound frightening at first (who enjoys being assessed?), but ATT&CK assessments are part of a larger process that gives security engineers and architects useful data to justify threat-based security improvements:

  1. Assess how your defenses currently stack up to techniques and adversaries in ATT&CK,
  2. Identify the highest priority gaps in your current coverage, and
  3. Modify your defenses — or acquire new ones — to address those gaps.
The assessment and engineering process.

The levels for assessments and engineering are cumulative and build on each other. Even if you consider yourself an advanced cybersecurity team, we still encourage you to start at Level 1 and walk through the process to ease into a larger assessment.

Level 1

If you’re working with a small team that doesn’t have access to lots of resources and you’re thinking of doing a full assessment, don’t. The idea of right away creating a color-coded heatmap of the ATT&CK matrix that visualizes your coverage is appealing, but is more likely to leave you burnt out on ATT&CK than excited to use it. Instead, start small: select a single technique to focus on, determine your coverage for that technique, and then make the appropriate engineering enhancements to start detecting it. By starting this way, you can practice how you’d run a larger assessment.

Tip: Not sure which technique to start with? Check out Katie’s blog post for tips on how you might use ATT&CK and threat intelligence to choose a starting point.

Once you have a technique picked out, you’ll want to figure out what your coverage of that technique is. While you can use your own rubric, we suggest starting with the following categories of coverage:

  • Your existing analytics will likely detect the technique;
  • Your analytics won’t detect the technique, but you’re pulling in the right data sources to detect it; or
  • You’re not currently pulling in the right data sources to detect the technique.

Tip: When first starting out, keep your scoring categories simple: are you able to detect it or not?

A great way to get started on measuring coverage is to look for analytics that might already cover a technique. This can be time consuming, but well worth the effort: many SOCs already have rules and analytics that might map back to ATT&CK, even if they weren’t originally designed to do so. Oftentimes you’ll need to bring in other information about the technique, which you can get from the technique’s ATT&CK page or an external source.

As an example, suppose we’re looking at Remote Desktop Protocol (T1076) and we have the following alerts:

  1. All network traffic over port 22.
  2. All processes spawned by AcroRd32.exe.
  3. Any processes named tscon.exe.
  4. All internal network traffic over port 3389.

Looking at the ATT&CK technique page for Remote Desktop Protocol, we can quickly see that rule #3 matches what’s specified under the “detection” header, and a quick web search shows that port 3389 — specified by rule #4 — also corresponds to the technique.

Detection text for Remote Desktop Protocol.
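To make that mapping a bit more concrete, below is a minimal sketch (in Python) of what rules #3 and #4 might look like as simple analytics over process and network events. The event fields and rule structure are illustrative assumptions for this example, not the schema of any particular SIEM or sensor.

```python
# Sketch: the two matching example alerts expressed as simple analytics for
# Remote Desktop Protocol (T1076). Event dictionaries and field names are
# illustrative assumptions, not a specific SIEM or sensor schema.

def rdp_process_analytic(event):
    """Alert #3: any process named tscon.exe (often used to reconnect RDP sessions)."""
    return event.get("image_name", "").lower() == "tscon.exe"

def rdp_network_analytic(event):
    """Alert #4: internal network traffic to TCP port 3389 (the default RDP port)."""
    return event.get("dest_port") == 3389 and event.get("direction") == "internal"

if __name__ == "__main__":
    sample_events = [
        {"type": "process", "image_name": "tscon.exe", "command_line": "tscon.exe 2 /dest:console"},
        {"type": "network", "dest_port": 3389, "direction": "internal"},
        {"type": "network", "dest_port": 22, "direction": "outbound"},
    ]
    for event in sample_events:
        matched = (rdp_process_analytic(event) if event["type"] == "process"
                   else rdp_network_analytic(event))
        if matched:
            print("Possible T1076 (Remote Desktop Protocol):", event)
```

Recording the technique ID alongside each analytic as you go makes the later heatmap and aggregation steps much easier.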

If your analytics are already picking up the technique, great! Record your coverage for that technique and then pick a new one to start the process again. But if you’re not covering it, look at the data sources listed on the technique’s ATT&CK page and determine if you might be already pulling in the right data to build a new analytic. If you are, then it’s just a question of building one out.

But if you’re not pulling in the right data sources, what should you do? This is where engineering comes into play. Take a look at the data sources listed on the technique’s ATT&CK page as a possible starting point, and weigh how difficult it would be for you to start collecting each of them against how effectively you’d be able to use them.

Tip: A frequently cited data source is Windows event logs, which provide visibility into many ATT&CK techniques. A good resource for getting started with event logs is Malware Archaeology’s Windows ATT&CK Logging Cheat Sheet, which maps Windows events to the techniques you could detect with them.

The 97 out of 244 ATT&CK techniques that can be detected with process command-line parameters, which can be ingested via Windows event 4688.
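As a rough illustration of how command-line data can drive detection, here’s a small sketch that scans the CommandLine field of a Windows event 4688 record for strings associated with particular techniques. The record layout and the keyword-to-technique mapping are simplified assumptions for the example, not an authoritative mapping.

```python
# Sketch: flag possible ATT&CK techniques from the command line captured in
# Windows event 4688 (process creation, with command-line auditing enabled).
# The indicator-to-technique mapping below is illustrative, not authoritative.

COMMAND_LINE_INDICATORS = {
    "tscon.exe":    "T1076",  # Remote Desktop Protocol
    "schtasks":     "T1053",  # Scheduled Task
    "reg.exe save": "T1003",  # Credential Dumping (registry hive export)
}

def techniques_for_command_line(command_line):
    """Return technique IDs whose indicator strings appear in the command line."""
    lowered = command_line.lower()
    return [tid for indicator, tid in COMMAND_LINE_INDICATORS.items() if indicator in lowered]

if __name__ == "__main__":
    # Simplified stand-in for a 4688 event record.
    record = {"EventID": 4688, "CommandLine": r"reg.exe save HKLM\SAM sam.hive"}
    print(techniques_for_command_line(record["CommandLine"]))  # ['T1003']
```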

Graduating to the next level: Don’t stop at one technique: run through this process several times, picking a new technique (or two) from each tactic on each run. Keep track of your results using the ATT&CK Navigator, which is great for generating heatmaps of ATT&CK coverage. Once you feel comfortable with the process, perform a data source analysis and come up with a heatmap of which techniques you could detect given the data sources you’re pulling in. Some resources that can help you get started here include Olaf Hartong’s ATT&CK Datamap project, DeTT&CT, and MITRE’s own ATT&CK scripts.
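If you’d rather script the data source analysis than build it by hand, here’s a minimal sketch of the idea: map the data sources you’re already collecting to the techniques they could support, then write the result out as an ATT&CK Navigator layer you can load as a heatmap. The data-source-to-technique mapping is a tiny illustrative subset (tools like DeTT&CT and the ATT&CK scripts derive the full mapping from ATT&CK’s own content), and the layer fields shown are just the common ones; check the Navigator documentation for the exact layer format your version expects.

```python
import json

# Illustrative subset of data source -> technique mappings. In practice, derive
# this from ATT&CK itself (e.g., via DeTT&CT or MITRE's attack-scripts).
DATA_SOURCE_TO_TECHNIQUES = {
    "Process monitoring": ["T1076", "T1053", "T1086"],
    "Process command-line parameters": ["T1086", "T1003"],
    "Netflow/Enclave netflow": ["T1076", "T1043"],
}

COLLECTED_DATA_SOURCES = ["Process monitoring", "Process command-line parameters"]

def detectable_techniques(collected):
    """Union of techniques supported by at least one data source we collect."""
    techniques = set()
    for source in collected:
        techniques.update(DATA_SOURCE_TO_TECHNIQUES.get(source, []))
    return sorted(techniques)

def to_navigator_layer(technique_ids, name="Data source coverage"):
    """Build a simple Navigator layer; field names follow the commonly used layer schema."""
    return {
        "name": name,
        "domain": "mitre-enterprise",
        "description": "Techniques with at least one supporting data source collected",
        "techniques": [{"techniqueID": tid, "score": 1} for tid in technique_ids],
    }

if __name__ == "__main__":
    layer = to_navigator_layer(detectable_techniques(COLLECTED_DATA_SOURCES))
    with open("data_source_coverage.json", "w") as f:
        json.dump(layer, f, indent=2)
```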

Level 2

Once you’re familiar with this process (and have a few more resources at your disposal), you’ll ideally want to expand your analysis to span a reasonably large subset of the ATT&CK Matrix. Additionally, you’ll likely want to use a more advanced coverage scheme that also accounts for fidelity of detection. Here we like to recommend bucketing coverage by whether you have low, some, or high confidence that a tool or analytic in your SOC will alert on the technique.

A sample of what a final assessment might look like.
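One lightweight way to keep track of those buckets is to encode them as numeric scores that drop straight into a Navigator layer like the one sketched earlier, so the heatmap gradient follows your confidence levels. The scores and example techniques below are arbitrary choices for illustration.

```python
# Sketch: record low/some/high detection confidence as numeric scores that can
# be dropped into a Navigator layer's "techniques" list. Scores and example
# techniques are arbitrary illustrative choices.
CONFIDENCE_SCORES = {"none": 0, "low": 1, "some": 2, "high": 3}

assessment = {           # technique ID -> confidence bucket (illustrative values)
    "T1076": "high",     # Remote Desktop Protocol
    "T1086": "some",     # PowerShell
    "T1003": "low",      # Credential Dumping
}

layer_techniques = [
    {"techniqueID": tid, "score": CONFIDENCE_SCORES[bucket], "comment": bucket + " confidence"}
    for tid, bucket in assessment.items()
]
print(layer_techniques)
```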

Tip: Don’t worry about pinpoint accuracy when trying to assess your coverage — your goal with assessments is to understand if you have the engineering capabilities to generally detect techniques. For more accuracy, we recommend running adversary emulation exercises.

This expanded scope makes analyzing analytics slightly more complex: each analytic can now potentially map to many different techniques, as opposed to just the one technique from before. Additionally, if a technique is covered by an analytic, you’ll want to tease out that analytic’s fidelity as well.

Tip: For each analytic, we recommend finding what it’s keying in on and seeing how that maps back to ATT&CK. As an example, you might have an analytic that looks at a specific Windows event; to determine this analytic’s coverage, you can look up the event ID in the Windows ATT&CK Logging Cheat Sheet or a similar repository. You can also use the ATT&CK website to analyze your analytics — the figure below shows an example of searching for detection of port 22, which shows up in the Commonly Used Port ATT&CK technique.

ATT&CK site search for port 22
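If you keep a simple inventory of what each analytic keys in on, this lookup is easy to script. In the sketch below, the event-ID-to-technique table is a tiny illustrative stand-in for a fuller resource like the Windows ATT&CK Logging Cheat Sheet, and the analytics themselves are made up for the example.

```python
# Sketch: map analytics to ATT&CK techniques based on the Windows event IDs
# they key in on. The EVENT_ID_TO_TECHNIQUES table is a small illustrative
# stand-in for a fuller mapping such as the Windows ATT&CK Logging Cheat Sheet.
EVENT_ID_TO_TECHNIQUES = {
    4688: ["T1059", "T1086"],  # process creation -> Command-Line Interface, PowerShell
    4624: ["T1078"],           # successful logon -> Valid Accounts
    7045: ["T1050"],           # service installation -> New Service
}

analytics = [
    {"name": "Suspicious encoded PowerShell", "event_ids": [4688]},
    {"name": "New service on a domain controller", "event_ids": [7045]},
]

for analytic in analytics:
    covered = sorted({tid for eid in analytic["event_ids"]
                      for tid in EVENT_ID_TO_TECHNIQUES.get(eid, [])})
    print(analytic["name"], "->", covered)
```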

Another important aspect to consider is the Group and Software examples listed alongside a technique. These describe the procedures, or specific ways, an adversary has used a technique. Oftentimes they represent variations of a technique that may or may not be covered by your existing analytics, and they should also be factored into your confidence assessment of how well you cover a technique.

Examples section of Windows Admin Shares

In addition to looking at your analytics, you’ll also want to start analyzing your tools. To do this, we recommend iterating through each tool — creating a separate heatmap for each — and asking the following questions:

  • Where does the tool run? Depending on where a tool is running — e.g., at the perimeter or on each endpoint — it may do better or worse with specific tactics.
  • How does the tool detect? Is it using a static set of “known bad” indicators? Or is it doing something behavioral?
  • What data sources does the tool monitor? Knowing the data sources a tool monitors lets you infer which techniques it might detect.

Answering these questions can be hard! Not all vendors publish this kind of information, and oftentimes when you hunt for it you’ll wind up finding marketing material. Try not to spend too much time getting bogged down in the specifics; instead, aim for broad strokes that capture general coverage patterns.

To create a final heatmap of coverage, aggregate all of the heatmaps for your tools and analytics, recording the highest coverage score for each technique (a small sketch of this aggregation step appears just after the list below). Once you have this, you’ll want to turn towards improvement; as a first step, we like to recommend a more advanced version of the analytic development process we mentioned earlier:

  1. Create a list of high-priority techniques that you want to focus on in the short-term.
  2. Ensure you’re pulling in the right data to start writing analytics for the techniques you’re focusing on.
  3. Start building analytics and updating your coverage chart.
Start with your current coverage, add analytics, and update your coverage accordingly.
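Here’s a minimal sketch of the aggregation step mentioned above, assuming each tool’s or analytic’s heatmap is just a mapping from technique ID to a numeric coverage score (0–3 to match a none/low/some/high scheme); the combined heatmap keeps the highest score seen for each technique. The per-tool values are illustrative.

```python
# Sketch: aggregate per-tool and per-analytic heatmaps by keeping the highest
# coverage score recorded for each technique. Scores are 0-3 to match a
# none/low/some/high confidence scheme; the per-tool values are made up.
def aggregate_heatmaps(heatmaps):
    combined = {}
    for heatmap in heatmaps:
        for technique_id, score in heatmap.items():
            combined[technique_id] = max(score, combined.get(technique_id, 0))
    return combined

edr_heatmap = {"T1076": 3, "T1086": 2}
siem_heatmap = {"T1076": 1, "T1003": 2}

print(aggregate_heatmaps([edr_heatmap, siem_heatmap]))
# {'T1076': 3, 'T1086': 2, 'T1003': 2}
```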

You may also want to start upgrading your tools. As you’re analyzing documentation, keep track of any optional modules that you might be able to use to increase your coverage: if you do come across any, look into what it would take to enable them on your network, and balance that effort against the coverage they offer. If you can’t find any additional modules for your tools, you can also try to use the tools themselves as alternative data sources. As an example, you might not be able to install Sysmon on each of your endpoints, but your existing software might be able to forward relevant logs that you wouldn’t otherwise have access to.

Graduating to the next level: Once you start implementing some of these changes and improving your coverage, the next step is to introduce adversary emulation, and in particular, atomic testing. Each time you prototype a new analytic, run a matching atomic test and see if you caught it. If you did, great! If you didn’t, see what you missed, and refine your analytic accordingly. You can also check out our paper on Finding Cyber Threats with ATT&CK-based Analytics for more guidance on this process.

Level 3

For those with more advanced teams, a great way to amp up your assessment is to include mitigations. This helps move your assessment beyond just looking at tools and analytics and what they’re detecting, toward looking at your SOC as a whole.

A good way to identify how you’re mitigating techniques is to go through each of your SOC’s policies, preventative tools, and security controls; map them to the ATT&CK technique(s) they may impact; and add those techniques to your heatmap of coverage. Our recent restructuring of mitigations allows you to go through each mitigation and see the techniques it’s mapped to. Some example techniques with mitigations include:

Mitigations for Brute Force (left) and Windows Admin Shares (right).
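If you’d rather pull those mitigation-to-technique mappings programmatically than read each page, one approach is to parse the ATT&CK STIX bundle published in MITRE’s cti repository on GitHub: “mitigates” relationships link course-of-action objects (mitigations) to attack-pattern objects (techniques). The sketch below assumes you’ve downloaded enterprise-attack.json locally; libraries such as attackcti wrap the same data if you’d rather not parse it yourself.

```python
import json

# Sketch: derive mitigation -> technique mappings from a locally downloaded copy
# of the ATT&CK Enterprise STIX bundle (enterprise-attack.json from mitre/cti).
with open("enterprise-attack.json") as f:
    objects = json.load(f)["objects"]

by_id = {obj["id"]: obj for obj in objects}

def attack_id(obj):
    """Return the ATT&CK ID (e.g., T1076) from an object's external references, if present."""
    for ref in obj.get("external_references", []):
        if ref.get("source_name") == "mitre-attack":
            return ref.get("external_id")
    return None

mitigation_map = {}
for obj in objects:
    if obj.get("type") == "relationship" and obj.get("relationship_type") == "mitigates":
        mitigation = by_id.get(obj["source_ref"])
        technique = by_id.get(obj["target_ref"])
        if mitigation and technique and attack_id(technique):
            mitigation_map.setdefault(mitigation["name"], set()).add(attack_id(technique))

for name, technique_ids in sorted(mitigation_map.items()):
    print(name + ":", sorted(technique_ids))
```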

Another way to extend your assessment is to interview — or informally chat with — others who work in your SOC. This can help you better understand how your tools are being used, as well as highlight gaps and strengths you might otherwise not consider. Some example questions you might want to ask include:

  • What tools do you use most frequently? What are their strengths and weaknesses?
  • What data sources are you unable to see that you wish you could see?
  • Where are your biggest strengths and weaknesses from a detection perspective?

Answers to these questions can help you augment the heatmaps you made earlier.

Example: If you previously found a tool that has a lot of ATT&CK-related capabilities, but personnel are only using it to monitor the Windows Registry, then you should modify that tool’s heatmap to better reflect how it’s being used.

As you talk to your colleagues, look at the tool heatmaps you had previously created. If you’re still not satisfied with the coverage your tools are providing, it may be necessary to evaluate new ones. Come up with a heatmap of coverage for each prospective new tool and see how adding it helps enhance your coverage.

Tip: If you’re particularly well-resourced, you can stand up a representative test environment to run the tool live, recording where it performs well, where it falls short, and how adding it would impact your existing coverage.

Lastly, you may be able to decrease your reliance on tools and analytics by implementing more mitigations. Look at the mitigations in ATT&CK to gauge whether you can practically implement them. Consult your detection heatmap as part of this process; if there’s a high-cost mitigation that would prevent a technique you’re already doing a good job of detecting, it may not be a good trade-off. On the other hand, if there are low-cost mitigations you can implement for techniques that you’re struggling to write analytics for, then implementing them might be a good use of resources.

Tip: Always weigh the potential loss of visibility when investigating removing detections in favor of mitigations. Make sure you have some visibility in cases where a mitigation or control may be bypassed so those events are less likely to be missed. Detection and mitigation should both be used as tools for effective coverage.

In Closing: Where Assessments and Engineering Fit In

Assessing your defenses and guiding your engineering can be a great way to get started with ATT&CK: running an assessment provides you with an understanding of where your current coverage is, which you can augment with threat intelligence to prioritize gaps, and then use to tune your existing defenses by writing analytics.

Long-term, you shouldn’t envision yourself running an assessment every week, or even every month for that matter. Instead, keep a running tab on your most recent assessment, updating it every time you get new information and periodically running adversary emulation exercises to spot-check your results. Over time, changes in your network and in what you collect may have unintended consequences that reduce the effectiveness of previously tested defenses. By leveraging ATT&CK to show how your defenses stack up to real threats, you’ll be able to better understand your defensive posture and prioritize your improvements.

Visualization of ATT&CK use cases

©2019 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 18–3288–12.
