Introducing secator: The Pentester’s Swiss Knife
A few days ago, my company Freelabz released secator on GitHub.
Introduction
secator is a Python-based swiss-knife tool that standardizes input / output for many recon (and other) tools that you use daily, like ffuf, subfinder, nmap, nuclei, and many others.
secator is also a workflow automator: it ships with a set of out-of-the-box workflows (run secator w to list them), and you can write custom workflows as well in YAML format.
Why we (and you?) need secator
Input / output hell
One of the things that slows us down when doing pentesting is constantly switching back and forth between dozens, if not hundreds, of different tools, command-line interfaces, graphical interfaces… each one having its own set of input options and output formats.
For instance, say you want to find URLs using several fuzzing tools like ffuf, dirsearch, and feroxbuster, so that you can maximize your chances of finding something.
For each of those tools you will have to:
- Remember the target flag (was it -target, -host, or simply a positional argument? How about the other flag used to pass a file?)
- Remember the option names for common run options like rate limit, timeout, follow redirects, wordlist, search depth, proxy… (what was it, --rate-limit, -rate, or --max-rate?)
- Remember the output formats (was it -json or -j or -jsonl or --json? Wait, did this tool even support JSON output?)
You’ll end up typing never-ending commands like:
ffuf -noninteractive -recursion -u http://testphp.vulnweb.com/FUZZ -json -r -rate 100 -t 50 -timeout 4 -recursion-depth 2 -w /usr/share/seclists/Fuzzing/fuzz-Bo0oM.txt
feroxbuster --auto-bail --no-state --output test.json --url http://testphp.vulnweb.com --json --redirects --rate-limit 100 --threads 50 --timeout 4 --depth 2 --wordlist /usr/share/seclists/Fuzzing/fuzz-Bo0oM.txt
dirsearch -o test.json -u http://testphp.vulnweb.com --format json --follow-redirects --max-rate 100 --threads 50 --timeout 4 --max-recursion-depth 2 --wordlists /usr/share/seclists/Fuzzing/fuzz-Bo0oM.txt
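This flag zoo is exactly what secator abstracts away. As a rough mental model (this is not secator's actual implementation, just a sketch of the idea), you can think of it as a translation table from canonical option names to each tool's native flags:

```python
# Toy sketch of option "mutualization": one canonical option name mapped to
# each tool's native flag (the mappings mirror the example commands above).
OPTION_MAP = {
    "ffuf": {"rate_limit": "-rate", "timeout": "-timeout", "depth": "-recursion-depth"},
    "feroxbuster": {"rate_limit": "--rate-limit", "timeout": "--timeout", "depth": "--depth"},
    "dirsearch": {"rate_limit": "--max-rate", "timeout": "--timeout", "depth": "--max-recursion-depth"},
}

def build_args(tool: str, opts: dict) -> list:
    """Translate canonical options into a tool-specific argument list."""
    flags = OPTION_MAP[tool]
    args = []
    for name, value in opts.items():
        args += [flags[name], str(value)]
    return args

# The same canonical options expand to each tool's own flags.
for tool in OPTION_MAP:
    print(tool, build_args(tool, {"rate_limit": 100, "timeout": 4}))
```

With a table like this, you only ever remember one option name per concept, and the translation happens behind the scenes.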
Within secator, we have basic input / output principles:
- Targets are always a positional argument; they can be a single target, multiple comma-separated targets, or a file containing one target per line:
secator x <TASK_NAME> <TARGETS> <OPTIONS>
- Input options are mutualized among tools of the same category. For instance, you can use -rl (a.k.a. --rate-limit) for all tools supporting a rate limit.
- Output format is mutualized and always structured. You can pick JSON lines (-json), JSON (-o json), CSV (-o csv), or Google Sheets (-o gdrive).
That means you can use the same set of options to run the 3 tools above using the x (execute) command:
secator x ffuf http://testphp.vulnweb.com/FUZZ -rl 100 -timeout 4 -frd -depth 2 -o json
secator x feroxbuster http://testphp.vulnweb.com -rl 100 -timeout 4 -frd -depth 2 -o json
secator x dirsearch http://testphp.vulnweb.com -rl 100 -timeout 4 -frd -depth 2 -o json
… but you now have the same arguments for all tools, and the same output format, which is great for keeping your mind free of those implementation details.
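A side benefit of a single structured output format: post-processing results from different tools becomes trivial. A minimal sketch, assuming JSON-lines output with url and status_code fields (the field names here are illustrative, not secator's exact schema; check your own output):

```python
import json

# Hypothetical JSON-lines output collected from the three fuzzers above
# (field names are illustrative, not secator's exact schema).
lines = [
    '{"url": "http://testphp.vulnweb.com/admin/", "status_code": 200, "_source": "ffuf"}',
    '{"url": "http://testphp.vulnweb.com/admin/", "status_code": 200, "_source": "dirsearch"}',
    '{"url": "http://testphp.vulnweb.com/login.php", "status_code": 302, "_source": "feroxbuster"}',
]

# Deduplicate findings by URL, regardless of which tool reported them.
seen = {}
for line in lines:
    result = json.loads(line)
    seen.setdefault(result["url"], result)

for url in sorted(seen):
    print(url, seen[url]["status_code"])
```

Try doing the same dance when one tool writes CSV, another writes plain text, and a third writes its own JSON dialect.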
Keeping our work and results organized
Pentesting involves a lot of manual work, and that means we can get disorganized rapidly…
We often lose track of our result files or terminal outputs in the depths of our hard drives.
Questions that often pop up after a pentesting or recon session are:
- Where was that result file saved again?
- Did I even save this tool’s run results as JSON?
- What was this output file named again?
- How can I consolidate or merge my results to make the report my client wants?
With secator, those questions are a no-brainer, since the results of tasks and workflows are structured, sorted, and conveniently placed in your reports folder.
Pick your output format once, don’t think about it ever again.
Optimizing your pentesting sessions
One thing I’ve noticed about the audits and pentesting sessions I have been running is that there are usually four phases: general recon, mass results analysis, targeted recon, and exploitation.
With secator, we aim to optimize those four phases and improve our productivity in each one:
Phase I: General recon — where we always use more-or-less the same set of commands, even though the input options might change a bit across engagements: different rate limits, different HTTP request timeouts, different wordlists, whether or not to use proxies…
Using secator workflows, the set of commands you use to do rapid recon on targets can be formalized as a YAML config. For instance, our previous set of URL-fuzzing tasks can be formalized like this:
type: workflow
name: url_fuzz
alias: urlfuzz
description: URL fuzz (slow)
tags: [http, fuzz]
input_types:
  - url
tasks:
  _group:
    dirsearch:
      description: Find URLs with dirsearch
    feroxbuster:
      description: Fuzz URLs with feroxbuster
    ffuf:
      description: Fuzz URLs with ffuf
      targets_:
        - type: target
          field: '{name}/FUZZ'
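The _group key holds tasks that have no dependency on each other, so a runner is free to launch them side by side. As a toy illustration (secator's real engine uses Celery for this, as mentioned later; the dict below just mirrors the YAML above), grouped tasks map naturally onto a concurrent executor:

```python
from concurrent.futures import ThreadPoolExecutor

# The url_fuzz workflow as a plain dict (mirrors the YAML above, simplified).
workflow = {
    "name": "url_fuzz",
    "tasks": {
        "_group": {
            "dirsearch": {"description": "Find URLs with dirsearch"},
            "feroxbuster": {"description": "Fuzz URLs with feroxbuster"},
            "ffuf": {"description": "Fuzz URLs with ffuf"},
        }
    },
}

def run_task(name: str) -> str:
    # Placeholder for launching the real tool; here we just echo the name.
    return f"ran {name}"

# Tasks under _group are independent, so they can run concurrently.
group = workflow["tasks"]["_group"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_task, group))
print(results)
```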
… which we would run using:
secator w url_fuzz http://testphp.vulnweb.com
… and tweak the mutualized input options as we need:
secator w url_fuzz http://testphp.vulnweb.com -rl 100 -frd -screenshot
… which will instruct all tools to keep a rate limit of 100 requests per second, to follow redirects, and to take screenshots of the visited pages.
You can specify a custom directory where secator looks for additional YAML files using the SECATOR_EXTRA_CONFIGS_FOLDER environment variable.
Phase II: Mass results analysis — where we take a deep look into our tools’ results to see what we’ve found: potentially vulnerable targets, interesting API endpoints, suspicious HTTP responses…
Using secator workflows, your results are saved in a structured format and organized by output type, which makes them a whole lot easier to review, analyze, and manipulate:
- All workflow results are saved as JSON and CSV files by default.
- You can use workspaces via the -ws <WORKSPACE_NAME> flag to keep results for the same targets under the same folder.
- You can save results to MongoDB using -driver mongodb along with the MONGODB_URL environment variable, and use advanced MongoDB queries to explore your results.
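Because every result carries its output type, slicing a report is a one-liner. A sketch over a toy report (the _type field mirrors the idea of typed outputs; the exact schema in your own reports may differ):

```python
from collections import defaultdict

# A toy report mimicking a structured workflow run (schema is illustrative).
report = [
    {"_type": "url", "url": "http://testphp.vulnweb.com/admin/", "status_code": 200},
    {"_type": "vulnerability", "name": "XSS", "severity": "high"},
    {"_type": "url", "url": "http://testphp.vulnweb.com/login.php", "status_code": 302},
]

# Group results by output type, the same axis secator organizes reports on.
by_type = defaultdict(list)
for result in report:
    by_type[result["_type"]].append(result)

for output_type, items in sorted(by_type.items()):
    print(output_type, len(items))
```

The same grouping idea scales up to MongoDB queries once you switch to the mongodb driver.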
Phase III: Targeted recon — where we focus on one area that we’ve identified as interesting in the previous phase, and run some deeper analysis using either different tools, or tweaked input options that are more precise, “on target”; with the goal of finding a vulnerability.
Using secator x <TASK_NAME> <OPTIONS>, you can run any single tool integrated in secator and customize the run to your liking using the available run options.
Phase IV: Exploitation — where we focus on one promising vulnerability, find its related exploits, and try to use them. One problem here is that we often get hundreds of loosely-related exploits that are not necessarily usable, which wastes a considerable amount of our time.
Using secator, you can correlate CVE vulnerabilities with their related exploits, and even exploit vulnerabilities using some of the integrations:
- nmap scripts to look up exploits and check for CPE matches, increasing your confidence of finding usable exploits.
- searchsploit to search for exploits based on a particular CPE or product name and version.
- msfconsole to run exploits directly (currently in alpha stage).
- dalfox to run XSS attacks.
You can put any of those capabilities into any workflow by simply editing your workflow YAML file.
Getting started
The fastest way to get started is to use pip to install secator and then explore all the available subcommands yourself:
pip install secator
Now start exploring the CLI:
secator x # list all available tasks
secator x ffuf --help # list ffuf options
secator x ffuf http://testphp.vulnweb.com/FUZZ # launch a fuzzing task
secator w # list all available workflows
secator w host_recon --help # list host_recon options
secator w host_recon testphp.vulnweb.com # launch a host recon workflow
Once you have the basics of using secator down, you can start adding more options to customize how you run your tasks / workflows.
Here is an example using more options:
secator w host_recon testphp.vulnweb.com -rl 100 -proxy auto -p 80,443 -tags ssl,network -screenshot -o table,json,csv -ws vulnweb
Let’s break down this command to understand what each of those options does:
- -rl 100: cap the rate limit at 100 requests per second for all tools that support it
- -proxy auto: route requests through an automatically picked proxy
- -p 80,443: restrict port scans to ports 80 and 443
- -tags ssl,network: only run the workflow tasks tagged ssl or network
- -screenshot: take screenshots of visited pages
- -o table,json,csv: output results as a CLI table plus JSON and CSV files
- -ws vulnweb: store results under the vulnweb workspace
When in doubt, remember to use --help to understand the meaning and details of each option:
secator w host_recon --help
A 5-minute pentesting session
Let’s take http://testphp.vulnweb.com for a secator run…
1. Crawling URLs
We start with a very simple url_crawl workflow that will run cariddi, gau (offline URLs), katana, and httpx to find HTTP endpoints and web pages.
secator w url_crawl http://testphp.vulnweb.com -screenshot -headless -rl 100
- -screenshot is passed to take a screenshot image of the visited pages (an option for katana)
- -headless is passed to crawl with a headless Chrome browser instead of a static curl client
- -rl 100 specifies a rate limit of 100 requests per second
By default workflow results are shown on the CLI and also stored in JSON and CSV formats.
At first glance the results already show some interesting data:
- Some tags coming from cariddi finding LFI / SQLi parameters at /showimage.php?file=./pictures/7.jpg
- Interesting endpoints found by katana and fingerprinted by httpx, like:
  - /hpp/?pp=12 with the page title ‘HTTP Parameter Pollution Example’
  - /search.php?test=query
2. Finding patterns from interesting URLs
Without digging too deep for now, let’s run a url_vuln workflow that will run gf on our interesting URLs to tag them with patterns (e.g. XSS / RCE / LFI patterns in the path), and then run dalfox on XSS-vulnerable patterns.
Let’s save the 3 URLs we found interesting in a vulnerable_urls.txt file and run the url_vuln workflow with the file as input:
secator w url_vuln vulnerable_urls.txt
We see gf tagging some of those URLs for XSS. After this, dalfox runs, ends up finding a verified XSS, and even gives us the payload to reproduce it in our browser.
Going deeper
Once you are comfortable with using secator like a semi-pro, we really want our fellow pentesters to be able to dig deeper into it and tweak it to their specific needs…
We want to foster a community where pentesters can become contributors, without necessarily needing to understand advanced Python code.
The complete documentation will help you unlock some of the steps to go deeper:
- Writing new tasks
- Writing new workflows
- Increasing workflow execution speed using dynamic targets, concurrent tasks and distributed runs with Celery
Conclusion
We hope you have fun exploring secator, and that you spread the word to your fellow hackers, pentesters, security researchers, and co-workers.
Cheers, and happy new year!