Just recently I stumbled across a Twitter poll by Andrew Thompson asking whether defenders (blue team) should show the simulated adversaries (red team) how they caught them after an exercise.
I was one of the few who answered “no” and gave some explanation.
Andrew pointed out that my actual answer is “it depends”, as plenty of red teams do their work outside of public view, and he’s right.
But let me list other reasons why I think that the permanent cat-and-mouse game of red and blue teams, each showing the other all their secrets, seems worthwhile and effective but is not, and how it could be improved.
Problem 1: Single Point(s) of Failure
Red and blue teaming is designed to reflect the real struggle between defenders and real-world adversaries. A real-world adversary needs to land a blow in each segment of the kill chain to reach their goal, and so does the red team.
The red team’s task is to reach that goal (e.g. domain admin rights on the root domain) just like a real-world adversary would at some point. It doesn’t have to explore, say, all possible ways to deliver a dropper to a spear-phishing target; it just has to discover one.
For each stage of the kill chain, only one working method is required to get to the next link in the chain. Sometimes more than one method is discovered, but that often happens by accident or due to automation.
A red team may achieve the final goal and tell the blue team everything about the methods used in each stage, but what does that tell us defenders?
- Did we learn about the most probable method of exploitation in each stage? No.
- Do we reduce the attack surface significantly by applying countermeasures that close that single loophole? No.
So, what did the red team prove? They proved that they were able to find their custom path over the various stages of the kill chain. That’s it.
Don’t get me wrong! The blue team learns from that exercise, that’s beyond question. But is it the most effective and comprehensive form in which the potential of a red team can be harnessed?
Problem 2: Focus on the Kill Chain
Red teams often focus on a kill chain that’s too short; in other words, they reduce the exercise to methods of finding their way in.
However, in the real world, persistent threats often get detected when adversaries get to the actual work: jumping from system to system seeking the valuable information they were ordered to obtain, as well as collecting and exfiltrating huge amounts of data.
Numerous incidents were uncovered by operational monitoring reporting full disk partitions, massive outgoing DNS requests originating from an internal server, or high CPU usage caused by a “rar.exe” process on a web server in the DMZ.
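A minimal sketch of the kind of operational-monitoring check I mean, in Python. The log format, host names and the threshold value are all illustrative assumptions, not any real product’s schema: the idea is simply that summing outgoing DNS volume per internal host and flagging outliers catches exactly the “massive outgoing DNS requests” case described above.

```python
from collections import defaultdict

# Hypothetical log entries: (source_host, outgoing_dns_query_bytes) per interval.
# The 5 MB threshold is an illustrative assumption; tune it to your baseline.
THRESHOLD_BYTES = 5_000_000

def flag_dns_exfil(log_entries, threshold=THRESHOLD_BYTES):
    """Return hosts whose summed outgoing DNS volume meets or exceeds the threshold."""
    totals = defaultdict(int)
    for host, nbytes in log_entries:
        totals[host] += nbytes
    return sorted(host for host, total in totals.items() if total >= threshold)

entries = [
    ("web01", 1_200),
    ("db02", 6_500_000),   # massive DNS volume from an internal server
    ("db02", 800_000),
    ("ws15", 4_000),
]
print(flag_dns_exfil(entries))  # ['db02']
```

A real deployment would baseline per-host volume over time instead of using a fixed threshold, but even this crude sum is the sort of unglamorous monitoring that has uncovered real intrusions.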
Many red teams limit their operation to the “way in”: reach a target system, get maximum rights and sometimes establish a backdoor. They know how to obfuscate a dropper, but do they train, simulate and obfuscate the exfiltration of gigabytes of data? Did they get the task to take over the domain, or did they get the task to retrieve the newest blueprints of development department X?
This brings me to the next issue.
Problem 3: Incomplete Simulation
Red teams should simulate an adversary in the most authentic manner — but this includes clumsy and noticeable behaviour as well, doesn’t it?
The red teamers I know are often highly skilled, and the payloads I find during my work and research on VirusTotal are often distinguishable from real-world adversaries’ tool sets. I asked myself “why is that?” and tried to put my impressions into words.
- Obfuscation is often custom-made
- Unique forms of evasion
- A tendency towards the MSF/Veil frameworks with some custom encoding / layer of obfuscation
- A bigger PowerShell toolset and reuse of work done by other researchers (e.g. Assembly load code snippets)
Red teams should simulate an adversary with all its strengths and weaknesses, not be a better one. They should:
- study and use the tool sets of well-known threat groups, even if that means they get detected while using a decade-old nbtscan or htran (I am not kidding, you really should)
- apply different levels of clumsiness and, to achieve that, study APT reports provided by the threat intelligence community. Yes, this means that you should activate the Guest account and add it to the local administrators group even if your soul revolts against it.
- set realistic goals, because a complete takeover of the top-level domain has never been the actual goal of any threat group out there. Some red teams already do this right. Set goals like “retrieve all internal and confidential information about the call for public tenders of customer/project X” or “get screenshots of a management console of an X-ray machine in the baggage handling system of airport terminal X”.
What would actually help us defenders?
As a conclusion, I’d like to add some thoughts, not previously stated in the chapters above, on how red teaming could be improved from my viewpoint as a defender.
(Note: the phrase “red teamer” is most likely wrong in that context; I’ll still call them “red teamers”, knowing that the team I describe has a different colour or no colour at all.)
As a defender I’d like to get:
- a more complete picture of the weaknesses and loopholes in each stage of the kill chain. Yes, this takes a lot of time, but I believe that red teamers’ time is better spent this way than on constantly finding new paths through the stages with every iteration of the game
As stated above: more authentic simulations by …
- using the attackers’ tool sets even if they’re old and lack creativity and a personal touch of custom obfuscation
- studying threat reports and adding clumsiness even when the red team knows and can do better
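The per-stage picture I’m asking for above can be sketched as a simple coverage matrix: kill-chain stages mapped to techniques, each marked as detected or not. All stage and technique names below are made up for illustration; the point is that enumerating gaps per stage is a different (and to me, more useful) deliverable than a single path through the chain.

```python
# Toy detection-coverage matrix per kill-chain stage; every name is illustrative.
coverage = {
    "delivery":         {"spear-phish attachment": True,  "USB drop": False},
    "execution":        {"macro dropper": True,           "LNK payload": False},
    "lateral movement": {"pass-the-hash": False,          "WMI exec": True},
    "exfiltration":     {"DNS tunnelling": True,          "HTTPS upload": False},
}

def uncovered(coverage):
    """List (stage, technique) pairs that have no detection in place."""
    return [(stage, tech)
            for stage, techs in coverage.items()
            for tech, detected in techs.items()
            if not detected]

for stage, tech in uncovered(coverage):
    print(f"gap: {stage} / {tech}")
```

Even a toy like this communicates more to a defender than “we got domain admin via path X”: it says where the surface is thin, stage by stage.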
The reason why I demand these changes is not that I fear the fight (I win anyway 😽). The reason is that in the heat of the moment people lose sight of the red team’s actual purpose: not to win the game, but to train the blue team for the real fight.
That’s the red team’s whole purpose: “train the blue team” — not: “win the game.”
The blue team’s purpose is: “catch the real threat!” — not: “teach the red team how you caught them so that they know how to evade your detections in the next exercise”.
Sure, you can play that overt cat-and-mouse game until both teams reach a very high degree of knowing how the other team operates, but from my point of view, red and blue teams’ time could be spent more efficiently on better setups and wider scopes.
After some discussions and comments on Twitter, I now have a much clearer picture of red teams’ purpose and limits.
First, there has been a mix-up of “what I saw red teams do” and “what red teams actually should do”. I was told that “proper red teams” already do a sound adversary simulation.
There has also been a misunderstanding about the real value provided by red teaming. I, and I guess many customers too, always believed that red team exercises would help you discover gaps in your detection. This false belief led me to the recommendation that tests of protection/detection coverage in the different stages of the kill chain would be extremely helpful.
“Proper red teamers” showed me that a red team’s primary goal isn’t the discovery of detection gaps, but rather the discovery of shortcomings in the reaction itself: processes, coordination, communication and the like. You get an incomplete list of detection gaps as a mere byproduct. The main purpose is to discover the blue team’s internal weaknesses.
I wrote this blog post from my viewpoint as a 1.) defender that has helped customers close detection gaps in the past, 2.) incident responder that often stumbles over traces of red team activity and 3.) threat intel analyst that often finds red teams’ scripts, implants and payloads pretty distinguishable from real-world examples.
I recommend you read the discussions on Twitter to get a better, more complete picture.
Note 1: I didn’t include operational shortcomings like the lack of documentation provided by red teams. I think the solution is straightforward and doesn’t have to be discussed in public.
Note 2: You may find the list of problems incomplete. You’re probably right, but it’s Saturday and I have a car that demands to be cleaned.
Final note: First I thought I could do this in a series of tweets. Good that I decided otherwise.
Resolution: it was a red teamer’s idea to pad the executable with “M” characters. I just noticed it the other day.