When Scale Outpaces Human Intervention, It’s Not a People Problem
Manual processes are rubbish. People are slow. I don’t mean that as a pejorative: we’ve spent years building faster machines, embracing containerization, and expanding the scale of our operating environments to the point where the processes we rely on to maintain our security posture can no longer be performed at human speed. Many of us have written code in our careers to automate processes, yet for years we’ve misdiagnosed this problem as a lack of available talent in the workforce.
The Importance of Codifying Processes
Day-to-day manual processes are often mundane and error prone because people take shortcuts, whether intentionally or otherwise. When presented with technology such as Security Orchestration, Automation and Response (SOAR), it’s important to understand that there are two distinct types of automation embedded in this acronym. The Automation piece is often easy to grasp because of the familiar “lazy developer” use case: you need to do something you don’t want to have to do again, so you write a script. Orchestration, on the other hand, allows us to codify processes, ensuring that defined processes are followed while optimizing them, automating individual procedures, eliminating lag through parallelization, and reducing the handoffs that slow the process down.

Consider the process of onboarding a new employee, from the moment they accept an offer to their first day of employment: user accounts have to be created, laptops or workstations need to be requisitioned, office space may need to be reserved… the list goes on. I’ll pause for a moment to note that this particular use case is leveraged within Palo Alto Networks for user onboarding, with the goal of impressing upon you that SOAR use cases are not limited to security alert management, and to ask, anecdotally: has SOAR been pigeonholed by the inclusion of “security” in the acronym? It is my belief that any repeated process within an organization that has a defined antecedent, be it an alert or any other impetus, is a viable use case for consideration with SOAR.
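The onboarding workflow above can be sketched as a small orchestrated playbook. This is a minimal illustration in plain Python, not any specific SOAR product’s API; the task functions are hypothetical placeholders for real integrations (identity provider, IT ticketing, facilities). It shows the core orchestration idea: the process is codified once, and independent steps run in parallel to eliminate handoff lag.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tasks standing in for real product integrations --
# an identity provider, an IT ticketing system, a facilities system.
def create_user_account(name):
    return f"account created for {name}"

def requisition_laptop(name):
    return f"laptop requisitioned for {name}"

def reserve_office_space(name):
    return f"desk reserved for {name}"

def onboard(name):
    # The playbook codifies which steps must happen; because the steps
    # are independent, they run in parallel instead of as serial handoffs.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(task, name)
                   for task in (create_user_account,
                                requisition_laptop,
                                reserve_office_space)]
        return [f.result() for f in futures]

print(onboard("Ada"))
```

In a real SOAR platform each task would be a product integration rather than a local function, but the shape is the same: the defined process is enforced by code, and the parallelism is free.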
Reflecting on Automation
I often talk to customers about a familiar book many of us had on our desks 20 years ago: Programming Perl. Perl, for all its sins, provided an interpreted scripting language that enabled admins to write a script once and run it on the plethora of systems we were running at the time, from Windows to innumerable System V and BSD variants. A quick survey of common SOAR products in the market shows that it’s not uncommon for these products to offer well over 300 individual product integrations, with one well over 800. The ability these products give you to automate against everything in your environment is unprecedented.
XDR and the Hidden SOAR
XDR caught many of the EDR vendors by surprise. In the market today, we’re seeing two distinct strategies for delivering an XDR solution: collecting telemetry before an alert is generated, for the purpose of improved analytics, and collecting data from related systems only in response to an alert. While the latter strategy does nothing to improve the analytics available on collected data (because the data is collected after detection), it is an interesting market approach to solving critical SOAR use cases identified by Gartner. It leaves out the broader uses of SOAR products, but it does largely give you an out-of-the-box SIEM + SOAR solution for that platform.
But Back To The Initial Problem… Scale
A lot of this blog has been focused on SOAR, but I want to dig into another huge consequence of digital transformation and cloud adoption: you very likely have assets outside of your allocated IP space. Sadly, many of us in the security space have spent years relying on vulnerability management scanners to help with asset identification and management, but scanners only scan what you already know about. We’re living in a world of automation where tools such as ZMap can scan the entire internet, probing for vulnerabilities, in under 45 minutes. A recent Unit 42 Ransomware Threat Report found that average ransom demands in 2022 were roughly $2.2M. The days when nation-state adversaries were uniquely tooled as a result of their funding are gone, making asset management more critical than ever. In this cloud-adopted, as-a-Service world, it’s imperative that you have a real-time understanding of the assets you’re responsible for protecting, and that can only be done effectively through automation. You can’t rely on Nessus to find exposures you aren’t pointing it at, and unlike nation states, ransomware operators may not care who they’ve compromised until after you’ve been popped. The only answer for finding these external exposures is external attack surface management (EASM), and to paraphrase a lesson many of us learned on Saturday mornings many years ago… knowing is [ONLY] half the battle. Being able to respond to possible exposures in real time, using a common automation framework shared throughout your security organization to assess and protect them, provides efficiencies in skill sets and hiring that multiple point automation products cannot.
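The core EASM idea, finding the assets you don’t know about, reduces to a diff between what you believe you own and what the internet can actually see. The sketch below is a toy illustration with hypothetical data, assuming some external discovery feed supplies the set of hosts observed answering on the internet; a real EASM product does the discovery itself and enriches each finding.

```python
# Minimal sketch: diff an internal asset inventory against externally
# observed hosts. All addresses are hypothetical (RFC 5737 doc ranges).

# What your CMDB / vulnerability scanner scope says you own.
known_assets = {"203.0.113.10", "203.0.113.11"}

# What an external scan actually observed responding on the internet
# (e.g. output of an EASM discovery feed). Note the second address
# is outside your allocated IP space.
discovered = {"203.0.113.10", "198.51.100.7"}

# Exposures you own but didn't know about -- the assets no scanner
# you point at known_assets will ever find.
unknown_exposures = discovered - known_assets
print(sorted(unknown_exposures))  # -> ['198.51.100.7']
```

In practice that resulting set would feed straight back into the SOAR framework discussed above, so each newly discovered exposure triggers an assessment playbook rather than waiting on a human to notice it.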
In closing, we’ll never have the budgets and staff we need to secure our infrastructures against well-funded adversaries who are automating against us until we learn to embrace automation ourselves. People are slow.