ATT&CK Evaluations for Enterprise: Carbanak+FIN7 Welcomes 30 Participants with a Site Update
Earlier this year we provided additional details on the Carbanak and FIN7 round of ATT&CK Evaluations for Enterprise, and also announced that the ATT&CK Evaluations program was moving to MITRE Engenuity, MITRE's tech foundation for public good. Today we complete our transition with an update to our site and a formal welcome to the participants of this upcoming round of evaluations. As part of MITRE Engenuity, our objectives, ideals, and commitment to helping vendors and end users alike understand the current state of capabilities to defend against adversary behavior as described by ATT&CK remain unchanged. We will continue to extend our evaluations and release new content so that everyone can benefit from the work.
The Carbanak and FIN7 evaluations are underway. We are extremely pleased to announce that 30 vendors answered this round’s call for participation. This year also marks the first time an optional Protections scenario is available, and 18 of those 30 vendors have chosen to participate in that as well.
The full list of participants is as follows (* indicates Protections participant): AhnLab*, Bitdefender, BlackBerry Cylance*, Broadcom*, Check Point, Cisco*, CrowdStrike*, Cybereason*, CyCraft*, Cynet*, Elastic, ESET*, F-Secure, Fidelis, FireEye, Fortinet*, GoSecure, Kaspersky*, Malwarebytes*, McAfee*, Micro Focus, Microsoft*, OpenText, Palo Alto Networks*, ReaQta, SentinelOne*, Sophos*, Trend Micro*, Uptycs, VMware
The hands-on execution of the evaluations will run through the remainder of the year. We anticipate releasing the results, as well as our emulation methodology, in early 2021. We look forward to working with so many innovative solution providers, both past participants and new ones.
Vendor Comparison Tool
In addition to a new look and feel, we have released a Vendor Comparison Tool on the new site. The tool was inspired by an end-user story we received a while back. The user was trying to select between two products that had both participated in our evaluations. To assess the products' performance in the ATT&CK Evaluations, they opened the All Results view for both vendors in separate browser windows, then scrolled through them side by side to compare the results. While it was great to hear how someone was using our results, it was disappointing to hear how manual the process was. We decided to develop a tool so that others who want to use our results this way aren't burdened with multiple browsers and manual analysis.
The Vendor Comparison Tool supports both rounds of evaluations released so far. The tool is built to highlight differences in detection categories; we make no judgment about whether a given difference is for better or worse. You should consider what matters to you when assessing the differences. Do you expect a certain technique to have a certain detection type, or is any detection sufficient? Is an alert beneficial, or is it likely to generate false alarms for your analysts to deal with? The tool draws your attention to the fact that a difference exists, but it is up to you to analyze that difference in light of your own needs.
Another point to note is that the Vendor Comparison Tool considers only the detection categories associated with each detection. Two technique detections may be similar in terms of the context provided to the analyst, yet differ in the logic or the data sources used to generate that context. So again, while we draw attention to differences, there is much more you should consider, and our detection notes will help you understand it. Hopefully this proves a useful catalyst that enables you to better use our results and compare vendor performance.
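If you prefer to script this kind of comparison yourself against exported results, the core idea reduces to a per-technique diff of detection categories. The sketch below is illustrative only: the data structures, technique names, and category labels are assumptions made for the example, not the tool's actual schema, and in the spirit of the tool it reports that categories differ without ranking them.

```python
# Hypothetical sketch of a per-technique detection-category diff.
# The dictionaries and labels below are illustrative assumptions,
# not the actual ATT&CK Evaluations data format.

vendor_a = {
    "T1055 Process Injection": "Technique",
    "T1059 Command and Scripting Interpreter": "Telemetry",
    "T1105 Ingress Tool Transfer": "None",
}
vendor_b = {
    "T1055 Process Injection": "Telemetry",
    "T1059 Command and Scripting Interpreter": "Telemetry",
    "T1105 Ingress Tool Transfer": "General",
}

def highlight_differences(a: dict, b: dict) -> list:
    """Return techniques whose detection categories differ.

    Flags *that* the categories differ, not which is better;
    interpreting the difference is left to the analyst.
    """
    diffs = []
    for technique in sorted(set(a) | set(b)):
        cat_a = a.get(technique, "Not evaluated")
        cat_b = b.get(technique, "Not evaluated")
        if cat_a != cat_b:
            diffs.append((technique, cat_a, cat_b))
    return diffs

for technique, cat_a, cat_b in highlight_differences(vendor_a, vendor_b):
    print(f"{technique}: {cat_a} vs. {cat_b}")
```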
This story is a perfect example of why we ask for feedback so often. One user's story drove the creation of a new tool that can hopefully benefit a much larger community. I want to thank them for sharing their story, and encourage you to also reach out to us if you are using the content we provide. We use this feedback, both what works and what doesn't, to improve what we offer the community.
ICS Evaluations
Last month we announced the first signups for our inaugural ATT&CK Evaluations for ICS, as well as an extension to the call for participation, which now runs through October 30th, 2020. The initial set of vendors includes Armis, CyberX (a Microsoft company), Dragos, the Institute for Information Industry, and Kaspersky. Together, and with any additional participants who may sign up before the deadline, we will be able to better understand the capabilities of ICS anomaly and threat detection software, as well as evaluation methodologies that apply to this unique and important domain. You will now find ATT&CK Evaluations for ICS content on our website alongside the Enterprise content. Additional information will be released as it becomes available. We are very excited to begin this collaborative research project. Please reach out to us if you would like more information or wish to join the cohort.
What’s Next?
As stated above, we are focused on broadening our impact. This includes growth into new market segments, refining our evaluation methodologies, and modifying the content we provide to lower the barrier to entry and improve usability. That starts with collecting and presenting data in a way that lets users get to the nuance of detections without requiring them to be experts. As always, we welcome feedback and ideas. In the meantime, we will explore some of our ideas for improving the accessibility of the data.
ATT&CK Evaluations isn’t just about the results. We hear from many people who are using our emulation plans, scripts, and do-it-yourself content to perform their own evaluations. Together with the Center for Threat-Informed Defense, we will work to advance the art of adversary emulation and enable organizations to be threat-informed.
© 2020 MITRE Engenuity. Approved for Public Release. Document number AT0006.