Introducing ATT&CK Evaluations Trials: First Up, Deception

Frank Duff
Published in MITRE-Engenuity
Nov 3, 2021 · 5 min read

Coming off last week's announcement that we will be conducting ATT&CK Evaluations for Managed Services, we have another evaluation opportunity for interested capability providers. Today we are announcing the ATT&CK Evaluations Trials program, a collaborative research effort in which we will conduct more tailored and focused evaluations for types of technologies that don't fall squarely within our current evaluation programs.

When MITRE Engenuity ATT&CK® Evaluations launched in 2018, participation began with a focus on the endpoint protection and detection markets. This made sense given that the methodology was created to provide clarity around ATT&CK coverage, which these markets directly try to address.

At the same time, during each Call for Participation we receive many inquiries from companies who want to be evaluated with respect to ATT&CK but offer different value propositions than the majority of our participants, who largely fall in the EDR/EPP market. Many security solutions focus their capabilities on a specific set of ATT&CK techniques or offer a non-detection-oriented defense. These vendors are concerned (and likely rightfully so) that when people make their inevitable declarations of winners and losers, their unique use case or scope will be completely lost on end users, making them look “bad” when they are simply different.

Diversity and depth of security solutions are critical to protecting our networks. Many different types of solutions all claim to protect you; unfortunately, in most cases, their true value in defending against known adversary behavior remains unanswered. It is for this reason that ATT&CK Evaluations is starting a new program, called ATT&CK Evaluations Trials. Trials is a research-focused expansion of the ATT&CK Evaluations landscape, in which we will work with vendors to develop new evaluation methodologies that better capture their value propositions in an honest and transparent way. Each trial will have different objectives, different designs, and different outputs, but all will maintain the public-good mission you have become familiar with through our evolution from MITRE to MITRE Engenuity.

To give you an idea of how this ATT&CK Evaluations program works, let me introduce our first Trial research project:

ATT&CK Evaluations Trials: Deception

Deception technology offers unique value to organizations seeking to understand adversary behavior. It can dramatically increase analyst confidence in detection via high-fidelity tripwires, cause the adversary to waste time, money, or capability, and potentially provide critical new insights into adversary behavior. Each of these use cases puts power back into defenders’ hands after they have long been forced to be reactionary. But while there is potential value, deception is also a technology that presents unique challenges for threat-informed evaluations.

As we thought about how to construct a deception methodology that would provide meaningful results to end users, articulate key differences in product strategies, and do so in a way that wouldn’t oversell the effectiveness of these solutions in affecting the adversary, we began to converge on two main questions:

  1. Did the adversary encounter the deception (i.e., could the deception capability affect the adversary)?
  2. Did the adversary engage the deception (i.e., did the deception capability affect the adversary)?

Whether the adversary encountered the deception is a fairly objective question to answer from a threat-informed perspective: run the adversary technique as the adversary would, and record whether they see something different within the environment than they would without the deception deployed. Engagement, on the other hand, is much harder to measure, because there is a very human element to the question:

● Did they engage it out of happenstance, or did they make the conscious decision to pick it because it seemed the better target?

● Would they have engaged the deception if they were presented with the same choice again?

● Would a different tester make the same choice?

● Would that choice change depending on whether they knew deception technology was deployed?

● Was the effect a short-term inconvenience, or did it affect their long-term mission?
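
To make the encounter/engage distinction concrete, below is a minimal, hypothetical sketch of how a per-technique observation might be recorded. The data structure, outcome categories, and example values are illustrative assumptions only, not the actual Trials methodology or schema.

```python
# Hypothetical sketch only: the record structure, outcome categories, and
# example values below are illustrative assumptions, not the actual
# ATT&CK Evaluations Trials methodology or schema.
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    NOT_ENCOUNTERED = "not encountered"  # the deception was never visible to the adversary
    ENCOUNTERED = "encountered"          # visible, but the adversary did not act on it
    ENGAGED = "engaged"                  # the adversary interacted with the deception


@dataclass
class TechniqueObservation:
    technique_id: str     # e.g., ATT&CK technique "T1083" (File and Directory Discovery)
    baseline_view: str    # what the adversary sees with no deception deployed
    deception_view: str   # what the adversary sees with the deception deployed
    outcome: Outcome
    notes: str = ""       # tester commentary capturing the human element of the choice


# The encounter question is objective (the two views differ); whether the
# tester then engages the decoy remains a human judgment captured in notes.
obs = TechniqueObservation(
    technique_id="T1083",
    baseline_view="ordinary directory listing",
    deception_view="directory listing that includes a decoy credentials file",
    outcome=Outcome.ENCOUNTERED,
    notes="Decoy was visible, but the operator chose the legitimate target.",
)
print(f"{obs.technique_id}: {obs.outcome.value}")
```

Recording the baseline and deception views side by side is what keeps the encounter question objective, while the notes field is where the subjective engagement questions above would have to live.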

When it comes to representing results, we face another challenge: how to represent outcomes in a universally fair way. Vendors measure success differently because of the variety of products in the market. For products focused on providing high-confidence tripwires, surface-level believability might suffice. Products that want adversaries to interact with them, wasting the adversary’s time and resources, will require a higher level of believability to keep the adversary engaged.

To help us determine how to articulate diverse value propositions in a way that minimizes the subjective, human evaluator components, we need to understand the problem space. It’s for this reason we consider this a research project. With participating vendors, we will collaborate to collect data on the purpose, utility, believability, and overall effectiveness of deception technology. We will try to identify common measures that would allow us to talk about products in a similar language, while still appreciating their unique capabilities and target use cases.

Attivo Networks and CounterCraft Security have both already confirmed their participation in this research program. We welcome others to join us to advance the adoption and understanding of deception capabilities. Interested parties should register by the end of November to accommodate early 2022 execution. Should you wish to participate or have an idea for a future ATT&CK Evaluations Trial program, please contact the team.

© 2021 MITRE Engenuity. Approved for Public Release. Document number AT0026.



Frank Duff (@FrankDuff) is the Director of ATT&CK Evaluations for MITRE Engenuity, providing open and transparent evaluation methodologies and results.