Deception — turning the tables on your adversaries

A lot of the tooling that companies use to defend their assets focuses on the upper part of the kill chain. Very little, however, is usually available for the very last link in that chain: Actions on Objectives.

I work in an environment where malware samples are plentiful. We detonate a lot of material in our sandboxes and feed the generated IOCs back into our security pipeline, but I felt we were missing out on something important: full exploitation.

By full exploitation I mean actually letting the adversary gain a foothold, do internal recon, harvest credentials, move laterally, exfiltrate data … all the good stuff. Why? Well, I theorised that this might generate some very valuable insights into a given adversary’s TTPs (Tactics, Techniques, and Procedures) once inside an organisation.

This in turn would generate clues that could be used for hunting, detection, remediation and even classification of adversaries. At the very least it might help to support our current assumptions about what we should be looking out for. One might argue that red teams could be used for exactly that purpose (and this is somewhat true) but nothing beats the real thing.

So I devised an experiment: if I could create a believable enough environment and then detonate selected samples from our sandboxes while I was watching them … would someone fall for it?

TL;DR They did. And it proved rather interesting :)

So I set up what appeared to be a small branch office of a larger organisation, complete with real Active Directory controllers, file servers, print servers, a mail server, desktops and users, all in an isolated environment.
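
For anyone wanting to reproduce something similar, part of the population work can be scripted rather than clicked through. The sketch below is purely illustrative and not a record of how this lab was built: it assumes Python with the ldap3 library, and the domain, OU, account names and credentials are all invented.

```python
# Hypothetical sketch: bulk-creating believable user accounts in a lab
# Active Directory via LDAP. All names, paths and credentials are placeholders.
from ldap3 import Connection, NTLM, Server

FAKE_USERS = ["anna.larsen", "ben.osei", "carla.meyer"]  # invented sample staff

server = Server("dc01.branch.example.local")
conn = Connection(server, user="BRANCH\\labadmin", password="********",
                  authentication=NTLM, auto_bind=True)

for name in FAKE_USERS:
    dn = f"CN={name},OU=Staff,DC=branch,DC=example,DC=local"
    conn.add(dn, ["top", "person", "organizationalPerson", "user"],
             {"sAMAccountName": name,
              "displayName": name.replace(".", " ").title()})
```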

This was a lot more work than I would like to admit — perhaps because I got a bit carried away with the “being realistic” side of things.

The desktops, for example, were real, physical machines where I would log in and pretend to do real work … writing docs, switching between programs, reading news and logging into my fake Facebook and fake Gmail accounts (in all honesty, I would coast around on my office chair and type on multiple keyboards, moving the mouse and logging in and out of machines to simulate people going to lunch, taking breaks, etc. during an active exploitation).

The idea was to exceed what most good sandboxes do for behavioural analysis, over an extended amount of time and with realistic surroundings. To my amazement, it actually worked!
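
If you wanted to supplement the manual keyboard-jockeying, some of that background noise could be scripted as well. A minimal, purely illustrative sketch assuming pyautogui, with the obvious caveat that scripted input patterns are exactly the sort of thing a careful adversary might notice:

```python
# Hypothetical sketch: generating low-grade "someone is at the keyboard" noise.
# In this lab the activity was done by hand; a script could only supplement that.
import random
import time

import pyautogui

SNIPPETS = ["Meeting notes for Q3 budget review",
            "Following up on the printer issue"]  # invented filler text

while True:
    # Wander the mouse somewhere plausible on screen.
    pyautogui.moveTo(random.randint(100, 1200), random.randint(100, 700), duration=1.5)
    if random.random() < 0.3:
        # Occasionally type something at a human-ish speed.
        pyautogui.write(random.choice(SNIPPETS), interval=0.12)
    # Idle gaps, as if reading or away from the desk.
    time.sleep(random.randint(30, 300))
```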

The network between the different machines was highly instrumented, and I would take memory dumps of the machines the attackers were on at regular intervals. I was careful not to populate the machines with potentially “suspicious” software that might have deterred the adversary (Sysmon et al.).
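
The dump-collection loop itself can be as simple as a scheduled call to a memory-acquisition tool. Here is a minimal sketch assuming WinPmem on the target; the tool choice, paths and interval are illustrative rather than a record of the actual setup:

```python
# Illustrative sketch: acquire a raw physical-memory image at regular intervals.
import subprocess
import time
from datetime import datetime

DUMP_DIR = r"D:\evidence"      # assumed collection disk/share
INTERVAL_SECONDS = 30 * 60     # assumed 30-minute cadence

while True:
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    out = rf"{DUMP_DIR}\memdump_{stamp}.raw"
    # Assumed invocation: WinPmem writing a raw image to the given output file.
    subprocess.run([r"C:\tools\winpmem.exe", out], check=True)
    time.sleep(INTERVAL_SECONDS)
```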

The first time was … amazing. Watching an active intruder “performing” for me was kinda cool: seeing them do a quick recon to get their bearings, download more tooling to fortify their position, harvest credentials, and attempt (and sometimes fail at) lateral movement. I did this a couple of times … and continued to tweak the setup to make it even more believable.

One particular adversary paid no attention to the cert chain on the machines … so I was able to successfully do SSL inspection of their C2 traffic :)
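
One common way to pull off that kind of inspection is a transparent proxy on the lab gateway. The sketch below is a mitmproxy-style addon that logs every decrypted request, purely as an illustration of the idea; how the interception was actually wired up in this lab isn’t covered here, and the log path is invented.

```python
# Illustrative mitmproxy addon: log each decrypted outbound request for analysis.
from mitmproxy import http

C2_LOG = "/var/log/lab/c2_requests.log"  # assumed log location


def request(flow: http.HTTPFlow) -> None:
    with open(C2_LOG, "a") as fh:
        fh.write(f"{flow.request.method} {flow.request.pretty_url}\n")
        if flow.request.content:
            # Keep a hex sample of the request body alongside the URL.
            fh.write(flow.request.content[:512].hex() + "\n")
```

Something like this would typically be run on the interception box with `mitmdump -s c2_logger.py`, with the lab’s routing forcing the C2 traffic through it.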

I learned a lot from these few full-exploitation trials and found some neat tools/tricks that I had not seen before, but there is still room for improvement :)

I would love to start subjecting an active attacker to certain kinds of stimuli and seeing how they tackle particular problems or issues, how they work around certain obstacles, and if or when they might give up the fight entirely as I steadily increase the difficulty level.

Essentially I would love to turn active adversaries into lab rats and subject them to controlled experiments.

Based on how they respond, you might even have a way to classify adversaries by technical excellence/arsenal, determination, persistence and even their capacity for creative thinking. This would allow risk scoring of groups or individuals and all kinds of other neat things.
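
To make that concrete, such scoring could be as simple as rating each observed trait and combining the ratings with weights. The sketch below is purely hypothetical; the traits, weights and numbers are invented for illustration, not taken from any real scoring model.

```python
# Hypothetical sketch: combine 0-10 trait ratings from lab observations
# into a single weighted risk score.
WEIGHTS = {"technical_skill": 0.3, "arsenal": 0.2, "determination": 0.2,
           "persistence": 0.15, "creativity": 0.15}


def risk_score(observations: dict) -> float:
    """Weighted average of the observed trait ratings."""
    return sum(WEIGHTS[trait] * observations.get(trait, 0) for trait in WEIGHTS)


print(risk_score({"technical_skill": 7, "arsenal": 5, "determination": 9,
                  "persistence": 8, "creativity": 4}))  # -> 6.7
```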

By doing this you would be able to do a lot of damage. Why? TTPs are the hardest part for an adversary to change … they can switch recon methods, file hashes, exploits, machines, networks and sometimes even tools, but once you get inside their head you have the upper hand … at least for a while.

Unless of course they know that you don’t know that they know … :)
