Responsible Red Teams
This is a topic that I’ve wanted to write about for some time now. There are people in the InfoSec industry, and specifically in the penetration testing and red team space, who won’t like what I have to say. I’ve tried to figure out some other way to say it, but gave up. This is a “sorry, not sorry” in advance.
Red Teams are not attackers. They are simulated attackers. Therein lies a huge difference. While all of InfoSec has a “candidate shortage” for all the open positions, there are always lines out the door and around the block for red team jobs. Maybe it’s Hollywood delusions of grandeur. Yes, it’s a cool job (I love my job and my team), but some red team candidates may need to check their motives for doing this kind of work.
With motives properly checked, we can remember that as simulated attackers, we are really still defenders. We pretend to be attackers. We mock up attack scenarios. The best of us go to great lengths to look like the real thing, and their stories inspire me and my team to go further as well. But we are still defenders. We have responsibilities to our employers, our clients/customers, our stakeholders, and the consumers/employees whose data is the treasure we chase and loot.
There are lots of things that RESPONSIBLE red teams will do that IRRESPONSIBLE red teams will not. The full list would be exhaustive; this article is not that list, but it’s a start. I hope it inspires you to pause and consider the extent to which you are being responsible. Closely related to “responsibility” in many of these actions is “maturity” … in other words, when I see irresponsible actions, I will, admittedly, assume the actor is immature.
There will also be exceptions to these things. There are times when CALCULATED IRRESPONSIBILITY is necessary in order to properly reflect a specific threat actor or class of threat actors. I get that. But those are the exception, not the rule. If you operate within the realm of these irresponsible actions daily, you’re probably not reflecting a specific type of threat actor — you’re being lazy.
Our goal is to decrease risk, not increase it.
So in no particular order …
· Responsible red teams protect data. That means they use encrypted protocols for C2. And they encrypt exfiltrated data: at minimum by downloading it over an encrypted protocol, or, if the channel itself is unencrypted, by encrypting the data at the file level before sending it.
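The file-level option above can be sketched in a few lines. This is a hedged illustration, not any particular team’s tooling: it assumes the third-party `cryptography` package, and key distribution (baking the key into the implant, deriving it per-op, etc.) is out of scope here.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def encrypt_for_exfil(path: str, key: bytes) -> bytes:
    """Encrypt a file's contents (Fernet: AES-CBC plus an HMAC tag)
    so the data never sits staged, or crosses the wire, in the clear."""
    with open(path, "rb") as f:
        return Fernet(key).encrypt(f.read())

# The operator generates the key once, out of band, before the op.
key = Fernet.generate_key()
```

Even over an already-encrypted transport, file-level encryption means a copy left behind on a staging host or proxy is still protected.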
· Responsible red teams also authenticate their C2 servers.* Regardless of the tools they drop, they ensure the callbacks out over the internet are hardened against a random third party taking over their shells.
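One common way to do this is certificate pinning: the implant carries a fingerprint of the team’s C2 certificate and refuses to talk to any server presenting a different one. A minimal standard-library sketch, with a hypothetical fingerprint value:

```python
import hashlib
import socket
import ssl

# Hypothetical: the SHA-256 fingerprint of the team's C2 certificate,
# distributed with the implant before the op.
PINNED_SHA256 = "d4f2c0ffee..."

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def connect_pinned(host: str, port: int, pinned: str) -> ssl.SSLSocket:
    """Open a TLS connection and abort unless the server's certificate
    matches the operator-distributed fingerprint exactly."""
    ctx = ssl.create_default_context()
    # The implant carries only the pin, not a CA bundle, so skip chain
    # validation and rely on the exact-match fingerprint instead.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if cert_fingerprint(der) != pinned:
        sock.close()
        raise ssl.SSLError("C2 certificate fingerprint mismatch")
    return sock
```

Because the check is an exact match against one certificate, a third party who hijacks the callback address cannot impersonate the C2 server without the team’s private key.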
· That means that responsible red teams don’t use netcat for forward or reverse shells. Knock that off. Maybe you need to use it once for an internal lateral movement between two highly controlled, non-mobile endpoints (two servers in a datacenter). Maybe. But that should be a last resort. Or do it only because of a specific blue team training objective. And make sure you have read in a “trusted agent” from blue team leadership to ensure it can be monitored and not abused in the process.
· That also means responsible red teams don’t use Metasploit or Metasploit Pro — unless they are ALWAYS using paranoid mode. Sorry. I know this may be your favorite tool, but without paranoid mode, Meterpreter payloads do not properly authenticate the server they call back to. I can hear your “but…” objections now. It’s for a short duration, or you’re very careful, etc. What happens when that artifact you drop in the target environment gets missed during cleanup and actually executes a year later? “… but we always clean up …” Who’s going to have Meterpreter running at w.x.y.z then? Maybe nobody. Maybe somebody. It’s 2018; there are better options. (Keep in mind, Metasploit was never really intended to be a C2 tool.)
· Responsible red teams don’t throw C2 servers all over virtual private cloud providers. Yes, it’s cool, your neat little red team automation script can deploy Empire and Cobalt Strike servers all over Digital Ocean, Linode, AWS, Azure, Google Cloud, with zero clicks … but why? There is a reason the InfoSec community collectively recoils every time we read about highly sensitive data running “on somebody else’s computer” (as the T-Shirt describes what “the cloud” really is). When you’re operating in your customer’s target environment, did you really think you wouldn’t access their most treasured secrets? Or are you just an insensitive jerk who doesn’t respect their data or their customers’ and employees’ data? How well did you clean up? Did you get a contract with that cloud provider stating they were going to protect your customers’ data? Did they sign on to be an extension of your customer’s PCI cardholder data environment?
· Responsible red teams install their C2 servers in private data centers with all of the regular management overhead around them, and then deploy simple TCP redirectors at various points on the internet to give the appearance of a highly distributed infrastructure. And guess what … when you do that, your “cloud infrastructure” is typically just a couple of lines of shell script (if that) per redirector host. If it gets “burned” and needs to move, it’s a two-minute manual task, probably as fast as your infrastructure automation task. Faster if you automate that very simple two-minute task.
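A bare-bones redirector really is tiny. This sketch (host names and ports are placeholders) just shuttles bytes between the victim-facing listener and the real C2 server; because the C2 channel is encrypted end to end, the redirector host never holds anything sensitive:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def redirect(listen_host: str, listen_port: int,
             target_host: str, target_port: int) -> None:
    """Accept connections and blindly forward both directions to the
    real C2 server sitting in the team's own data center."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((listen_host, listen_port))
    srv.listen()
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

In practice many teams do this with a one-liner (e.g. socat or an iptables DNAT rule) instead; the point is that the cloud host is a dumb, disposable pipe: burning it loses nothing.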
There are many, many more things responsible red teams do. This is just a start.
For those hiring consulting red teams
Before you sign the SOW, ask them how they protect your data and their implants on your hosts. Ask them about the data center where your credentials and sensitive data will end up. Is it theirs with good controls around it, or is it YOLOSEC that they will spin up on a dozen $5/month cloud provider hosts (littered all over the internet) on Day 1 of the engagement? What types of tools will they use? Don’t accept “we cannot divulge that” as an answer. Make them spell it out. They can say things like “commodity tools, like Empire or Cobalt Strike, with our modifications” or “our custom tooling.” Challenge them on how they perform authentication, how they simulate data exfiltration, and level set on the specifics of training objectives.
* There is one clear and justifiable exemption to authenticated C2 callbacks that I am distinctly aware of, but it doesn’t apply to most red teams, and even for those it does, it isn’t justifiable for all of their ops. Nation-state actors do not wish to be attributed, which is why you see no authentication in the toolsets that dropped with the EternalBlue leak, as an example. Authentication == attribution for them. For those red teams emulating nation states, this may be an exception. For the rest of you, knock it off.
[Update: 2/22/2018 — I conceded on Twitter that our review of paranoid mode’s authentication was incomplete — operator error on our part. It *can* be configured to properly authenticate a callback from a meterpreter payload using a form of certificate pinning, but that requires the operator perform a very careful and planned setup. I almost never observe my industry peers showing that step in their write-ups discussing metasploit as a C2 tool for red teams.]