MITRE released the results for Round 2 of their EDR evaluation scenario, this time emulating APT29. As you might have seen, nearly every vendor associated with the evaluation has issued a press release pronouncing their clear effectiveness and decisive victory over the competition. I want to avoid the marketing fluff and jump right into the data. What follows is an explanation of how I quantified the results, with layers of nuance that I hope will help customers find the right fit for their situation. Rather than provide a one-size-fits-all scoring methodology, I broke down results with clear lines of separation between human-derived detection and machine-only detection. If you're trying to better understand the market or looking to make a choice for a new EPP tool, what follows should be especially relevant to you. I've provided the GitHub link to code and scoring files at the bottom.
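To make the human-versus-machine split concrete, here is a minimal sketch of how such a tally might work. The input format, the modifier names (`"MSSP"`, `"Delayed"`), and the function name are all assumptions for illustration, not the article's actual scoring code or the evaluation's exact schema.

```python
from collections import Counter

# Modifiers assumed (for this sketch) to indicate human/analyst involvement.
HUMAN_MODIFIERS = {"MSSP", "Delayed"}

def score_detections(detections):
    """Tally detections into machine-only vs human-derived buckets.

    `detections` is a list of dicts like
    {"step": "1.A.1", "type": "Technique", "modifiers": ["MSSP"]},
    a hypothetical flattening of the published evaluation results.
    """
    counts = Counter()
    for det in detections:
        if det["type"] == "None":
            continue  # no detection recorded for this step
        has_human = bool(HUMAN_MODIFIERS & set(det.get("modifiers", [])))
        counts["human" if has_human else "machine"] += 1
    return counts

results = score_detections([
    {"step": "1.A.1", "type": "Technique", "modifiers": []},
    {"step": "1.A.2", "type": "Telemetry", "modifiers": ["Delayed"]},
    {"step": "1.B.1", "type": "None", "modifiers": []},
])
print(dict(results))  # {'machine': 1, 'human': 1}
```

Keeping the two buckets separate, rather than collapsing them into a single score, is what lets a buyer weigh raw product telemetry against detections that required a managed service in the loop.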


Jonathan Ticknor

Security Data Science Executive
