Internal Red Teams and Insider Knowledge

A red team’s job is to simulate an adversary. There are pros and cons to hiring consultants versus full-time employees to carry out this task, but I won’t be discussing that today. Instead, let’s focus on confirming or busting a myth: that an internal (full-time employee) red team should not have the “insider knowledge” that the blue team has.


The elephant in the room on this issue is trust. If you are on the blue team and you do not trust your red team with inside knowledge, you are likely concerned that they will “abuse” it. If your concern is that “the red team will win,” go back to square one and resolve the trust issue with that team. Both teams should share the same mindset: secure the organization. Mission details vary, but the intent should be the same. Red should take the extra-humble approach here, even if the conflict originates on the blue side.

If this is the sticking point, solve it with some discussion in a casual, non-work location and a team-building event.

A ton of synergy comes from a well-oiled red and blue team pair. Together, their security detection and response capabilities can advance far faster than they would separately.

Threat Models

What category of threat actor is the red team emulating?

If it’s a malicious insider, chances are the “internal information” is fair game — not just that, but knowledge of a weak internal process or architecture may be exactly what enables the malicious insider to succeed in the first place. In that case, you would be at fault for keeping that knowledge from your red team; share it so they can simulate the activity and blue can hone their detection and response capabilities.

Real Adversaries Won’t Know That Information

If it’s a criminal group, consider how much “inside information” can be enumerated or gleaned from repeated failed attempts. Attack infrastructure gets burned, then rebuilt for a second run at the same target — this time with new knowledge of the security control that foiled the previous attempt. Don’t discount a talented adversary fixated on a goal.

Revolving Doors

If your organization is large and was not founded yesterday, chances are there have been personnel changes: moves, adds, changes, terminations. If a weak control exists and could have been known by a former employee, then an adversary may have that knowledge, so your red team should, too. Just as companies occasionally have their internal business plans leaked to the public, humans can be a sieve when it comes to security controls. If a former employee could have known it, don’t keep it from your red team.

Good Luck Keeping It From Them

Let’s suppose you read all the way to this point and you STILL think red shouldn’t have any “inside information.” Okay. How many campaigns are you going to let red run before you completely turn over the staff? Only one? Because that’s all it may take for red to learn. The more iterations, the more information red will learn. Hopefully, they learn things nobody knew before — that’s part of the point: discovery of weaknesses to be corrected. Would you believe an internal red team that said they had zero inside knowledge after even just a few years of this work? I wouldn’t …

Every Day Low Cost

You may be thinking: “Exactly! I want them to learn this the hard way, just like an attacker would!” But that misses the expense of this approach. No two groups of attackers (even if one is a simulated red team attacker) will enumerate vulnerabilities the same way, and exhaustively searching the environment to locate them carries a definite cost. An optimization — a “hack,” even — is to give your internal team a leg up with some inside knowledge and then trace backwards to show a path to that point. It’s just like solving mazes as a kid: you started at the finish and worked backwards because you found a solution faster. In this case, the point is to determine whether paths exist at all, not which path is correct, so working backwards is even more apt.
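The backwards-maze idea can be sketched as a reverse search over a toy attack graph. This is only an illustration under assumed data — the node names and graph are hypothetical, not any real environment: starting from the asset the inside knowledge points at, invert the edges and ask which entry points can reach it, rather than exhaustively exploring forward from every foothold.

```python
from collections import deque

# Hypothetical attack graph (illustrative names only): edges point from
# a foothold toward what an adversary could reach from it.
attack_graph = {
    "phishing-foothold": ["workstation"],
    "workstation": ["file-share", "jump-host"],
    "jump-host": ["domain-controller"],
    "file-share": [],
    "domain-controller": ["crown-jewels"],
    "crown-jewels": [],
}

def reachable_from(graph, start):
    """Breadth-first search: every node reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def paths_exist(graph, target, entry_points):
    """Work backwards from the target: invert the edges, then ask which
    entry points lie upstream. Mirrors starting a maze at the finish."""
    reverse = {}
    for src, dsts in graph.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    upstream = reachable_from(reverse, target)
    return [e for e in entry_points if e in upstream]

print(paths_exist(attack_graph, "crown-jewels",
                  ["phishing-foothold", "file-share"]))
# -> ['phishing-foothold']
```

Note the backward search never visits branches that cannot lead to the target (here, `file-share` is pruned immediately), which is exactly the cost saving the inside knowledge buys.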

A Note to Internal Red Teams

The responsibility to use internal knowledge correctly is completely yours. Provided that you maintain the correct mindset — that your goal is to improve the organization through adversarial approaches to problems that may long have been considered “solved” — you will be fine. Use the inside knowledge but tie it back to the objectives of the campaign. Always focus on the path to enumerate this information and do not skip the step where you replicate the adversary enumerating this internal information, especially if the most honest, fair replication of this activity is very noisy. Do it — your goal is not to win; your goal is to protect the organization. A defense that detects an adversary at the enumeration step is a defense you can be proud of.

Use your best judgement. Perhaps there is a more plausible path an adversary will take without the inside knowledge. Or perhaps there is a high payoff for a former insider to use this knowledge to cause harm. Regardless of which path you choose, do so deliberately with full conscious awareness of why you chose it.