MITRE Engenuity ATT&CK® Evaluations: Managed Services — Round 2 (2023) Call for Participation

Ashwin Radhakrishnan
Published in MITRE-Engenuity
Mar 16, 2023

We are excited to officially open the Call for Participation for MITRE Engenuity’s ATT&CK® Evaluations: Managed Services — Round 2 (2023)! As shared in our OilRig (2022) release, we will be holding these evaluations annually. As always, we prioritize providing as much public utility as possible with each Evaluation to help contribute to understanding the efficacy of the people, process, and technology relevant to security programs built to address adversary behavior.

Goals for this Evaluation

Our Evaluations are structured as research projects and are designed to serve the following goals:

  1. Empower end-users with objective insights into how to leverage specific commercial cybersecurity capabilities to address known adversary behaviors.
  2. Provide transparency around the true capabilities of commercial security offerings to address known adversary behaviors.
  3. Drive the security vendor community to enhance their offerings to better address known adversary behaviors.

If you are a security practitioner or leader who is constantly inundated with marketing material, it can become a burden to sift through the dense collateral and discern which security solution is the best fit for your organization. Ultimately, in the land of a million great options, which commercially available Managed Service will best help you realize your security strategy? We hope that Evaluations are one of the many data points contributing to your final decision. If this is the first time you have interacted with the Evaluations content, we suggest reading this article that was published alongside the Managed Services (2022), OilRig results on November 9, 2022.

If you are a Managed Services provider who is looking to help your current and prospective clients understand more about how your service addresses known adversary behavior, Evaluations can be a phenomenal platform to showcase your service and highlight the nuances of your offering to your audience. Moreover, as we deliver on our third goal, we hope to continue influencing the community to emphasize features and functionality that continuously improve and address known adversary behavior. Many of our participants join our Evaluation to discover where to prioritize enhancements in their roadmap. When you have multiple high-priority feature requests from multiple high-priority clients, participation in Evaluations can help you weigh those priorities as part of your strategic delivery. For the full scope of a released Evaluation, review the Managed Services (2022), OilRig round Overview Page to learn more.

Extensions to our Methodology

In the interest of providing added value to our community, we are always looking for ways to extend our methodology. In this Evaluation, we are looking to answer the following questions as we march toward execution:

What are ways we can make the scenario more complex?

  • As will always be the case, Managed Services Evaluations will be run in a black box format. The purpose of employing this methodology is to ensure that we genuinely evaluate the ability of the service, rather than the tools deployed to fulfill the service. Without sharing the specifics of the Emulation Plan, we are working on making the scenario used for the adversary emulation far more complex than the last round. This will allow us to capture and publish results that help the community understand the nuances of each Managed Services participant.

What are some quantitative metrics we can capture and publish this round?

  • In the OilRig round, our methodology included designations for three Reporting Statuses to provide context for each step of our Emulation Plan: Reported, Not Reported, and Not Applicable. We are looking to publish even more context in the results this round. Can we publish a volume metric on the content that participants send to us? Can we capture important metrics, like Mean Time to Detect, consistently and objectively? With these extensions to our methodology, we aim to equip our community with the tools they need to make purchasing decisions.
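To illustrate how a metric like Mean Time to Detect could be computed objectively, here is a minimal sketch. The function name, the data layout, and the sample timestamps are all hypothetical (not taken from the Evaluations methodology); it simply averages the delay between when an emulation step was executed and when the participant first reported it, excluding steps that were never reported.

```python
from datetime import datetime, timedelta

def mean_time_to_detect(steps):
    """Average delay between an emulation step's execution and the
    participant's first report of that step.

    `steps` is a list of (executed_at, first_reported_at) pairs;
    steps that were never reported (first_reported_at is None)
    are excluded from the mean.
    """
    delays = [
        reported - executed
        for executed, reported in steps
        if reported is not None
    ]
    if not delays:
        return None  # nothing was reported, so MTTD is undefined
    return sum(delays, timedelta()) / len(delays)

# Hypothetical emulation steps: two reported, one missed.
steps = [
    (datetime(2023, 10, 2, 9, 0),  datetime(2023, 10, 2, 9, 12)),   # 12 min
    (datetime(2023, 10, 2, 9, 30), datetime(2023, 10, 2, 10, 18)),  # 48 min
    (datetime(2023, 10, 2, 10, 0), None),                           # Not Reported
]
print(mean_time_to_detect(steps))  # mean of 12 and 48 minutes -> 0:30:00
```

Whether unreported steps should be excluded or penalized is exactly the kind of methodological choice the round would need to define up front.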

How do we approach qualitative information collected in each Evaluation?

  • Supporting qualitative information in Evaluations results is always tricky. Our foremost principle is objectivity, and we, therefore, ensure we do not editorialize throughout the Evaluation, especially in our published results. While that will always be true, we can collect data that may lend itself to qualitative information, which can be published upon release. For instance, did the participant appropriately attribute the reported activity to a specific adversary? Were there remediation suggestions provided in the content sent as part of the Evaluation? These types of data points will help the community understand the qualitative context of each participant’s performance in the Evaluation.
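One way to keep such qualitative context objective is to record it as structured facts rather than free-form judgments. The sketch below is a hypothetical data model (the class and field names are ours, not part of the Evaluations methodology) combining the OilRig round's three Reporting Statuses with the qualitative data points mentioned above:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ReportingStatus(Enum):
    """The three Reporting Statuses used in the OilRig round."""
    REPORTED = "Reported"
    NOT_REPORTED = "Not Reported"
    NOT_APPLICABLE = "Not Applicable"

@dataclass
class StepResult:
    """Per-step record: status plus qualitative facts, no editorializing."""
    step_id: str
    status: ReportingStatus
    attributed_adversary: Optional[str] = None  # adversary named in the report, if any
    remediation_suggested: bool = False         # did the content include remediation steps?

# Hypothetical example: the participant reported the step,
# attributed it to OilRig, and suggested remediation.
result = StepResult(
    step_id="1.A.3",
    status=ReportingStatus.REPORTED,
    attributed_adversary="OilRig",
    remediation_suggested=True,
)
print(result.status.value)  # Reported
```

Because each field records an observable fact (was an adversary named? was remediation offered?), results built this way stay descriptive rather than evaluative.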

How do the answers to these questions lead to improvements to our Evaluations Platform?

  • Performing the best Evaluation possible is key, but the results must also be presented in the most meaningful format so our community can digest them effectively. Simply put, the methodology extensions described above mean little if we cannot make the results more usable. To that end, we want to significantly improve our UI to support all the new areas we are investigating. This includes supporting the qualitative and quantitative data on the Evaluations Platform and improving existing tools for analysis.

While we have well-formed answers to the questions above, we are eager to connect with our community to validate our strategies. To that end, if you would like to be part of the design process for these improvements, please contact us at evals@mitre-engenuity.org, and we will set up a time between March 27 and April 14, 2023, to discuss. We will be engaging with as many community members as possible in that timeframe.

Next Steps

This Call for Participation will close on June 30, 2023, with an iterative update scheduled for May 10, 2023. We estimate that the Execution Phase will commence in Q4 of 2023 and that the results will be published in Q2 or Q3 of 2024, depending on our cohort size. If you are interested in participating in this Evaluation, please fill out this form, and we will get back to you with additional details as soon as possible.

Lastly, we have devoted many resources to learning from our community and ensuring we innovate in ways that resonate with our stakeholders. We will be sharing content regarding our progress on that front very soon. As always, we appreciate the collaboration, and look forward to another exciting Evaluation!

© 2023 MITRE Engenuity, LLC. Approved for Public Release. Document number AT0043
