Automations in Continuous Vulnerability Management

Hüseyin Altunkaynak
Insider Engineering

--

What is Continuous Vulnerability Management?

Organizations must continually evaluate the vulnerabilities in their assets to minimize the window of opportunity for attackers, and should develop a plan to monitor security vulnerabilities. For this purpose, public and private sources should be monitored for new threats and vulnerabilities, and wherever possible the monitoring and action steps should be carefully assessed and automated.

Why Is Continuous Vulnerability Management Important?

Organizations are under constant attack by adversaries who look for vulnerabilities in their infrastructure in order to exploit them and gain unauthorized access to critical areas. To prevent this, timely access to software updates, patches, security framework best practices, and threat intelligence is critical. Every organization should therefore identify its own security vulnerabilities before attackers do, and regularly review its environments and assets. While it is easy to say what needs to be done, identifying, managing, and eliminating vulnerabilities is an ongoing, systematic activity that requires time, attention, and dedicated resources.

Organizations must continually evaluate the security of the infrastructure they use and proactively address the vulnerabilities they discover. Organizations that do not implement these processes put their infrastructure, assets, and the information of their employees and end users at serious risk. To prevent these undesirable situations and manage vulnerabilities effectively, automating the monitoring and action steps creates a stronger vulnerability management framework by minimizing human error and poor decisions. Automation also makes efficient use of a critical resource, time, by enabling quick action both during vulnerability discovery and during the first response.

Specifying the Scope

Subfinder

Link: https://github.com/projectdiscovery/subfinder

Subfinder is a subdomain discovery tool that finds valid subdomains for websites using passive online sources. It has a simple modular architecture and is optimized for speed. Subfinder is built to do one thing only, passive subdomain enumeration, and it does that very well. Although the domains we scan belong to us, there are some domains whose subdomain records we cannot pull automatically from our DNS panel; we use the subfinder tool for those domains.

Httpx

Link: https://github.com/projectdiscovery/httpx

Httpx is a fast and multi-purpose HTTP toolkit that allows running multiple probes using the retryable http library. It is designed to maintain result reliability with an increased number of threads.

After describing the tools we use, let us explain how we developed the automation. As with a pentest, the recon stage comes first, so we start by finding the subdomains. There are two ways to do this: the first is going to the panel where we manage DNS and pulling the CNAME and A records from there; the second is extracting CNAME and A records from different sources using tools such as subfinder.

We need to pull these records daily, because a different microservice is developed and published every day. Since it would be difficult for people to track these records manually, they should be fetched automatically before each scan. The API offered by the DNS management panel makes our work easier at this point.
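As a minimal sketch of this step, the helper below filters the A and CNAME records out of a DNS panel API response. The record shape (a list of dicts with `type` and `name` keys) is an assumption for illustration; the actual fields depend on your DNS provider's API.

```python
def extract_scan_targets(records):
    """Keep only A and CNAME records from a DNS API response and
    return their hostnames, deduplicated and sorted.

    `records` is assumed to be a list of dicts such as
    {"type": "A", "name": "api.example.com", ...} -- adjust the
    field names to match your DNS management panel's API.
    """
    targets = set()
    for rec in records:
        if rec.get("type") in ("A", "CNAME"):
            targets.add(rec["name"])
    return sorted(targets)
```

Running this before every scan keeps the target list current without anyone having to export records by hand.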

We also have some domains for which we cannot pull CNAME and A records from the DNS management panel. We use subfinder for these domains: before each scan, we run subfinder and pull the current records from different sources.
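A small wrapper can fold subfinder into the pipeline. The sketch below assumes subfinder's JSON-lines output (`-oJ`), where each line is an object with a `host` field; verify the flags against your installed subfinder version.

```python
import json
import subprocess

def parse_subfinder_json(lines):
    """Extract unique hostnames from subfinder's JSON-lines output."""
    hosts = set()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        hosts.add(record["host"])
    return sorted(hosts)

def run_subfinder(domain):
    """Run subfinder for one domain and return its subdomains.

    Assumes `subfinder` is on PATH; -silent suppresses the banner
    and -oJ emits one JSON object per result line.
    """
    out = subprocess.run(
        ["subfinder", "-d", domain, "-silent", "-oJ"],
        capture_output=True, text=True, check=True,
    )
    return parse_subfinder_json(out.stdout.splitlines())
```

Keeping the parsing separate from the subprocess call makes the record extraction easy to test without network access.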

After the records are collected, we use the httpx tool to find out which ports of our domains and subdomains are running web services. This step finishes quickly because httpx is fast and runs with a high thread count.
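The live-host step can be sketched the same way. Assuming httpx's `-json` output, where each line contains a `url` field for a responding service, a parser like this turns its output into the target list for the next stage:

```python
import json

def live_web_urls(httpx_json_lines):
    """Extract reachable URLs (scheme://host:port) from httpx -json
    output, one JSON object per line. The `url` field name is based
    on httpx's JSON output format -- confirm against your version."""
    urls = []
    for line in httpx_json_lines:
        line = line.strip()
        if not line:
            continue
        result = json.loads(line)
        urls.append(result["url"])
    return urls
```

The resulting URL list is exactly what the vulnerability scanners in the next section consume.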

Vulnerability Scan

Nuclei

Link: https://nuclei.projectdiscovery.io/

Nuclei sends requests across targets based on templates, providing zero false positives and fast scanning across a large number of hosts. It supports a variety of protocols, including TCP, DNS, HTTP, SSL, File, Whois, Websocket, and Headless. With its powerful and flexible templating, Nuclei can be used to model all kinds of security checks. Templates are developed both by the project maintainers and by the community. The fact that it is written in Go and actively developed by the community is the main reason we chose this tool.

After determining the scope to be scanned by automation, we use vulnerability scanning tools. We use two different toolkits while scanning for vulnerabilities. The first set of tools consists of open-source vulnerability scan tools. The other set of tools consists of purchased vulnerability scan tools.

We provide the subdomains and open web ports found above as input to the nuclei tool. We run nuclei, which we have strengthened with custom templates, in three different modes.

The first mode runs a newly added template against all assets. With this, when a new vulnerability emerges, we can quickly check all our assets and take action. The second mode is the severity scan, which is run to finish quickly and detect whether a critical vulnerability exists. The last mode is the bulk scan, where all templates are run; although it takes longer, it lets us perform a detailed check on our domains and subdomains.
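The three modes can be expressed as three command lines built from the same base. The sketch below uses real nuclei flags (`-l` for a target list, `-t` for a template path, `-severity` for a severity filter), but the mode names and the severity choice are our own conventions, not nuclei's:

```python
def nuclei_args(mode, targets_file, new_template=None):
    """Build a nuclei command line for one of our three scan modes.

    Mode names ("new-template", "severity", "bulk") are internal
    conventions for this automation, not nuclei features.
    """
    base = ["nuclei", "-l", targets_file]
    if mode == "new-template":
        # run only the freshly added template against every asset
        return base + ["-t", new_template]
    if mode == "severity":
        # quick pass that only looks for high-impact findings
        return base + ["-severity", "critical,high"]
    if mode == "bulk":
        # full template set: slowest, most detailed scan
        return base
    raise ValueError(f"unknown mode: {mode}")
```

A scheduler can then pick the mode per run: "new-template" on template updates, "severity" daily, "bulk" less often.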

We also use purchased vulnerability scan tools. We store the outputs of these tools by passing them through certain filters.

Fuzzing

Ffuf

Link: https://github.com/ffuf/ffuf

Ffuf is a tool sponsored by Offensive Security. Being written in Go and fast is the biggest factor in choosing it for fuzzing. It uses clusterbomb mode by default; pitchfork and sniper modes are also available.

Bfac

Link: https://github.com/mazen160/bfac

BFAC (Backup File Artifacts Checker) is an automated tool that checks for backup artifacts that may disclose the web application’s source code. The artifacts can also lead to the leakage of sensitive information, such as passwords, directory structure, etc. Apart from wordlist-based fuzzing, it also uses special algorithms to find backup files created specifically for each application.

With the vulnerability scan finished, we move on to another recon method: fuzzing. It can also be run before a vulnerability scan, and to avoid straining the systems it may be preferable not to perform different vulnerability scans on the same day. In request-heavy tools like fuzzers, we do not try all payloads on a single target before switching to the next target. Instead, we try one payload on all targets and then move on to the next payload. This way we do not exhaust our applications, and we can finish the scans faster.
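The payload-first iteration order described above is easy to sketch as a generator: the outer loop is over payloads, not targets, so consecutive requests always land on different hosts.

```python
def fuzz_order(targets, payloads):
    """Yield (target, payload) pairs payload-first: one payload is
    tried across all targets before moving to the next payload, so
    no single host absorbs a long burst of requests in a row."""
    for payload in payloads:
        for target in targets:
            yield target, payload
```

With two targets and two payloads, the order becomes (a, p1), (b, p1), (a, p2), (b, p2) instead of hammering target `a` with both payloads first.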

Fuzzing produces many false-positive results. To avoid this, we made a few improvements in the reporting phase. The first is eliminating repeated content lengths: if the same content length appears many times, the application is likely handling error codes unusually. If an application that should return a 4xx or 5xx status code instead reports that status only in the response body, fuzzing tools will not detect it. For such cases, we drop all results whose content length repeats beyond a certain threshold.
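The content-length filter can be sketched in a few lines. The result shape (dicts with a `content_length` key) and the threshold value are illustrative assumptions; map them onto whatever your fuzzer's output actually looks like.

```python
from collections import Counter

def drop_repeated_lengths(results, threshold=5):
    """Remove fuzzing hits whose content length occurs more than
    `threshold` times; a heavily repeated length usually means a
    templated error page served with a 200 status code."""
    counts = Counter(r["content_length"] for r in results)
    return [r for r in results if counts[r["content_length"]] <= threshold]
```

Everything that survives the filter has an unusual response size and is worth a human look.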

In addition to the wordlist scanning we do with ffuf, we also use the bfac tool. Because bfac scans by creating a domain-specific wordlist, it is well suited to scans covering many domains.

The wordlist we use in fuzzing is as important as the tool itself. We use a wordlist that we compiled and deduplicated from different sources; this way, we were able to catch critical vulnerabilities in different systems before they went live. In addition, there are wordlists our teammates use in their own tests. Running every wordlist all the time would reduce the availability of the applications, so short, high-signal wordlists are used frequently for fuzzing, while longer, less important wordlists are used less often.
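Compiling and deduplicating wordlists from several sources is a small, mechanical step; a sketch under the assumption of one word per line in plain-text files:

```python
def merge_wordlists(paths):
    """Merge wordlists from several source files, deduplicating
    while preserving first-seen order (so higher-priority sources
    can be listed first)."""
    seen = set()
    merged = []
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            for word in fh:
                word = word.strip()
                if word and word not in seen:
                    seen.add(word)
                    merged.append(word)
    return merged
```

The merged list can then be written out once and fed to ffuf with its `-w` option.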

Detecting Secrets

Gitleaks

Link: https://github.com/zricethezav/gitleaks

Gitleaks is a SAST tool for detecting and preventing hardcoded secrets like passwords, API keys, and tokens in git repos. Gitleaks is an easy-to-use, all-in-one solution for detecting secrets, past or present, in your code.

One of the latest tools included in our automation is Gitleaks. It is actively developed by the community, and since its detection logic is based on regexes, adding new detections is not difficult. For that reason, we added regexes for our organization’s private key formats. This way, we can be the first to notice any leak that may occur in public or private repositories and take quick action.
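The idea behind an organization-specific rule is just a regex matched against repository content. The pattern below is entirely hypothetical (a made-up `myorg_` prefix followed by 32 hex characters); a real rule would encode your own key format, whether expressed in Gitleaks' config or in a standalone check like this:

```python
import re

# Hypothetical key format for illustration only: keys start with
# "myorg_" followed by 32 hex characters. Replace with the actual
# shape of your organization's secrets.
ORG_KEY_RE = re.compile(r"\bmyorg_[0-9a-f]{32}\b")

def find_org_keys(text):
    """Return every substring that looks like one of our keys."""
    return ORG_KEY_RE.findall(text)
```

Anchoring the pattern with word boundaries keeps the rule from firing inside longer unrelated tokens.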

How We Handle Troubleshooting in Our Tools

Every application developed by humans has bugs. We added a try-except block to our automation, as shown below, so we could monitor errors. After adding it, we realized there were far more bugs in our automation than we thought, and we fixed them quickly. Having learned that errors must be monitored, we switched our other automations to the same structure.

https://gist.github.com/huseyince/f763fe847e5542c5294f1a9656240997
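The gist above holds the actual code; a minimal sketch of the same idea, assuming a pipeline of named steps and a hypothetical alerting hook, looks like this:

```python
import logging
import traceback

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("automation")

def run_step(name, func, *args, **kwargs):
    """Run one automation step inside a try/except so a single tool
    failure is logged and reported instead of killing the pipeline."""
    try:
        return func(*args, **kwargs)
    except Exception:
        log.error("step %r failed:\n%s", name, traceback.format_exc())
        # notify_team(name)  # hypothetical alerting hook (Slack, mail, ...)
        return None
```

Wrapping every stage this way is what surfaced the hidden bugs: each failure now leaves a traceback instead of silently truncating the run.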

Who Checks Scan Results

At first, because we had only automated the nuclei tool, there was not much output. As we started to include fuzzing and other outputs in the checklist, some findings became more likely to be overlooked. As a precaution, we started to review the outputs with teammates on a weekly basis, so a finding overlooked by one tester can be noticed by another.

Internal-External Vulnerability Scan

As in most organizations, the applications that are open to the world sit behind a WAF (Web Application Firewall), so anyone accessing our applications from outside goes through it. This restriction also affects the output of automated tools: vulnerabilities that cannot be reached from the outside, but are known to exist inside, cannot be found. Knowing this, we added VPN on/off switching to our automation, alternating on a weekly schedule. This way, our automation protects us against threats from both inside and outside.
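One simple way to implement the weekly alternation is to derive the scan side from the ISO week number; the sketch below is an assumption about how such a toggle could be scheduled, with the actual VPN connect/disconnect left to a separate hook.

```python
import datetime

def scan_side(today=None):
    """Alternate weekly between "external" (through the WAF) and
    "internal" (over VPN) scans based on ISO week number parity.
    The VPN toggle itself would be triggered by the caller."""
    today = today or datetime.date.today()
    week = today.isocalendar()[1]
    return "internal" if week % 2 == 0 else "external"
```

Deriving the side from the date, rather than storing state, means every machine in the automation agrees on the schedule without coordination.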

Reports Archive

Many independent tools run in our automation, and they produce a large number of outputs. We turn these outputs into meaningful reports and store them on a daily, monthly, and yearly basis. This way, we can access old reports and perform detailed analyses.
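A date-based directory layout makes the daily/monthly/yearly archiving almost free; the path scheme below is an illustrative convention, not the one the team necessarily uses.

```python
import datetime
import pathlib

def report_path(tool, root="reports", when=None):
    """Build a dated archive path like
    reports/2022/03/2022-03-14-nuclei.txt, so yearly and monthly
    views fall out of the directory structure for free."""
    when = when or datetime.date.today()
    folder = pathlib.Path(root) / f"{when:%Y}" / f"{when:%m}"
    return folder / f"{when:%Y-%m-%d}-{tool}.txt"
```

Listing `reports/2022/` then gives the annual view, `reports/2022/03/` the monthly one, and individual files the daily reports.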

Scan Timeline

There is a fine line between keeping applications accessible and scanning them for security. Your Internet-facing assets are exposed to both natural and potentially harmful requests throughout the day. Running these scans yourself keeps you one step ahead of attackers, but you should not block the availability of the application while doing so. We therefore scheduled the tools we use to run periodically so as not to disrupt the systems.
