Stop Trying to Take Humans Out of SOC … Except … Wait… Wait… Wait…

Anton Chuvakin
Published in Anton on Security
3 min read · Mar 1, 2021


This is about the Security Operations Center (SOC). And automation. And of course SOC automation.

Let’s start with a dead-obvious point: you cannot and should not automate away all the people in your SOC today. Or, as my esteemed colleague said, “Stop Trying To Take Humans Out Of Security Operations.”

Despite this point being dead-obvious today, I want to present a few arguments to further support it — it will be clear why in the end…

We need humans because the attackers are humans, with their own creativity, irrationality, weirdness, etc. As one vendor once said, “you have an adversary problem, not a malware problem.” We need to hunt, not rely solely on automated systems for things like detection; hence humans are a must. This side of the argument boils down to “we need humans because the attackers are [also human].” It is further reinforced by the various “why robots suck for security” arguments and all that.

So good automation is a “force multiplier,” not a force replacer. Admittedly, some tasks, such as data enrichment, are better done by machines, and humans can and should be relieved of them. The point is to remove some tasks from the humans, not to remove the humans from the SOC [entirely].
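To make “enrichment” concrete, here is a minimal sketch of the kind of rote context-adding a machine handles better than a person. All field names and lookup tables here are hypothetical, for illustration only, and do not reflect any particular SOAR product or the author’s tooling:

```python
# Hypothetical alert-enrichment sketch: illustrative data, not a real SOAR integration.

# Static context tables an automation pipeline might maintain (made-up values).
ASSET_OWNERS = {"10.0.1.5": "finance-team", "10.0.2.9": "hr-team"}
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def enrich_alert(alert: dict) -> dict:
    """Attach routine context so the analyst starts with facts, not manual lookups."""
    enriched = dict(alert)  # copy; do not mutate the original alert
    enriched["asset_owner"] = ASSET_OWNERS.get(alert.get("src_ip"), "unknown")
    enriched["dst_on_blocklist"] = alert.get("dst_ip") in KNOWN_BAD_IPS
    return enriched

alert = {"src_ip": "10.0.1.5", "dst_ip": "203.0.113.7", "rule": "beaconing"}
print(enrich_alert(alert))
```

Nothing here requires human judgment; the judgment comes after, when a person decides what the enriched alert actually means.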

Furthermore, bad automation kills. This is due to a perennial problem that also plagues the use of ML/AI in security: garbage in, garbage out. The problem is compounded by the fact that today’s automation logic (whether for detection or remediation) is simply not smart enough for the complex world of IT around it. So neither the data quality nor the algorithms measure up, and all this in a field where “cybersecurity is the most intellectually demanding profession on the planet.”

ED-209, the most famous “failure of security automation,” from RoboCop (1987)

So. Convinced? Sure. But let’s continue on our journey…

All the while, there are more and more voices for more automation. Their logic is also very understandable. We need automation, because we need to scale better and go faster, we have too much data, alerts, signals, threats, etc. There is much to be said about the value of various forms of automation in security (in general) and in security operations (in particular).

However, as I said above, to keep the discussion sane we should always remind ourselves that trying to take the humans out of the SOC is more or less insane.

Still with me? OK, but now you’d be somewhat surprised where our journey will suddenly turn…

Now, imagine the following scenarios:

  • You face an attacker in possession of a machine that can auto-generate reliable zero-day exploits and then use them (an upgraded version of what was the subject of the 2016 DARPA Cyber Grand Challenge)
  • You face attackers who use worms for everything, and these are not the dumb 2003 worms; they are coded by the best of the best of the offensive “community”
  • Your threat assessment indicates that “your” attackers are adopting automation faster than you are and the delta is increasing (and the speed of increase is growing).

Would you still say the same? Would you still give the same advice? All these are very hypothetical in 2021, to be sure, but what about 2025? 2030? 2035?

Frankly, you can cheat and say “the middle way is the way: humans need to work with machines.” And things would feel nice for a moment, until you realize this is what chess players said shortly after their first rout in 1997. The concept of human+machine (“centaur”) chess looked really awesome circa 1998–2015, but was then quickly and mercilessly killed by improving neural networks. Naturally, one may counter that chess is mathematically solvable while information security is not (by a wide, wide, wide margin). Sure, this argument holds water …today.


Today, I will still also say “Stop Trying To Take Humans Out Of Security Operations” but somewhere in the very back of my mind, a scary and cold uncoiling worm of doubt is born …

Thanks to Brandon Levene for a great discussion and some text contributed to this post.

Thanks to Dave Aitel for the disruptive ideas that triggered me to write this.