Recon Everything

Oct 8

Bug Bounty Hunting Tip #1- Always read the Source Code

Approach a Target

• Ideally, choose a program that has a wide scope. Also look for a bounty program that has a wide range of vulnerability types in scope.

• Mine information about the domains, email servers, and social network connections.

• Dig into the website: check each request and response and analyze them. Try to understand the infrastructure, such as how sessions/authentication are handled and what type of CSRF protection is in place (if any).

• Use negative testing to trigger errors; this error information is very helpful for finding internal paths of the website. Give yourself time to understand the flow of the application to get a better idea of what types of vulnerabilities to look for.

• Start digging in with scripts that brute-force endpoints against wordlists. This can help you find new directories or folders that you may not have been able to find just by using the website.
These tend to be private admin panels, source repositories they forgot to remove (such as /.git/ folders), or test/debug scripts. After that, check each form on the website and try to push client-side attacks, using multiple payloads to bypass client-side filters.

• Start early. As soon as a program is launched, start hunting immediately, if you can.

• Once you start hunting, take a particular functionality/workflow in the application and start digging deep into it. I have stopped caring about low-hanging fruit or surface bugs. There is no point focusing your efforts on those.

• So, let’s say an application has a functionality that allows users to send emails to other users.

• Observe this workflow/requests via a proxy tool such as Burp. Burp is pretty much the only tool I use for web app pen testing.

• Create multiple accounts, because you will want to test the emails being sent from one user to another. If you haven’t been provided multiple accounts, ask for them. To date, I have never been refused a second account when I have asked for one.

• Now, if you are slightly experienced, after a few minutes of tinkering with this workflow, you will get a feeling whether it might have something interesting going on or not. This point is difficult to explain. It will come with practice.

• If the above is true, start fuzzing, breaking the application workflow, inserting random IDs, values, etc. wherever possible. 80% of the time, you will end up noticing weird behavior.

• The weird behavior doesn’t necessarily mean you have found a bug that is worth reporting. It probably means you have a good chance so you should keep digging into it more.

• There is some research that might be required as well. Let’s say you found that a particular version of an email server is being used that is outdated. Look on the internet for known vulnerabilities against it. You might encounter a known CVE with a known exploit. Try that exploit and see what happens (provided you are operating under the terms and conditions of the bug bounty).

• There might be special tools that are required. Explore into that, if possible. Remember, Burp is a swiss army knife but you might have to use certain specific tools in certain cases. Always, be aware of that.

• After spending a few hours on this, if you think you have exhausted all your options and are not getting anything meaningful out of it, stop and move on. Getting hung up on something is the biggest motivation killer but that doesn’t mean you are giving up. Get back to it later if something else comes up. Make a note of it.

• Something that has worked for me is bounds checking on parameters, pick a parameter that has an obvious effect on the flow of the application.
For example, if a field takes a number (let’s call it ID for lulz).
What happens if:
-you put in a minus number?
-you increment or decrement the number?
-you put in a really large number?
-you put in a string or symbol characters?
-you try to traverse a directory with ../
-you put in XSS vectors?
-you put in SQLI vectors?
-you put in non-ascii characters?
-you mess with the variable type such as casting a string to an array
-you use null characters or no value
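These boundary checks are mechanical enough to script. A minimal sketch of a payload generator; the parameter value 42 and the exact payload set are illustrative, not tied to any target:

```python
def boundary_payloads(value: int):
    """Generate boundary-test values for a numeric parameter (illustrative set)."""
    return [
        str(-value),                   # minus number
        str(value + 1),                # increment
        str(value - 1),                # decrement
        str(10**18),                   # really large number
        "abc!@#$%",                    # string / symbol characters
        "../../etc/passwd",            # directory traversal
        "<script>alert(1)</script>",   # XSS vector
        "' OR '1'='1",                 # SQLi vector
        "\u00e9\u00df\u4e2d",          # non-ASCII characters
        f"[{value}]",                  # type change: array instead of scalar
        "%00",                         # null byte
        "",                            # no value
    ]

# Substitute each payload into the request (e.g. ?id=...) and diff the
# response status, length, and errors against the baseline response.
for p in boundary_payloads(42):
    print(p)
```

Feeding these through Burp Intruder or ffuf as a custom wordlist covers the whole checklist in one run.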

I would then see if I can draw any conclusions from the outcomes of these tests:
-can I understand what is happening based on an error?
-is anything broken or exposed?
-can this action affect other things in the app?

• Focus on site functionality that has been redesigned or changed since a previous version of the target. Sometimes, having seen/used a bounty product before, you will notice any new functionality right away. Other times you will read the bounty brief a few times and realize that they are giving you a map. Developers often point out the areas they think they are weak in. They want you to succeed. A visual example would be new search functionality, role-based access, etc. A bounty-brief example would be reading a brief and noticing a lot of pointed references to the API or to a particular page/function in the site.

• If the scope allows (and you have the skillset), test the crap out of the mobile apps. While client-side bugs continue to grow less severe, the APIs/web endpoints the mobile apps talk to often touch parts of the application you wouldn’t have seen in a regular workflow. This is not to say client-side bugs are not reportable; they just become low-severity issues as the mobile OSes raise the bar security-wise.

• So after you have a thorough “feeling” for the site you need to mentally or physically keep a record of workflows in the application. You need to start asking yourself questions like these:

• Does the page functionality display something to the users? (XSS,Content Spoofing, etc)

• Does the page look like it might need to call on stored data?

• (Injections of all types, indirect object references, client-side storage)

• Does it (or can it) interact with the server file system? (File upload vulns, LFI, etc)

• Is it a function worthy of securing? (CSRF, Mixed-mode)

• Is this function a privileged one? (logic flaws, IDORs, priv escalations)

• Where is input accepted and potentially displayed to the user?

• What endpoints save data?

• Any file upload functionality?

• What type of authentication is used?

Steps to take when approaching a target

1. Check/verify the target’s scope

2. Find subdomains of the target (refer to the subdomain enumeration tools mentioned in this article)

3. Run masscan

4. Check which domains resolve

5. Take Screenshot

6. Do Content Discovery (by bruteforcing the files and directories on a particular domain/subdomain)
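The numbered steps can be expressed as a dry-run plan that simply prints the commands for a target. The specific tools and flags below (subfinder, masscan, massdns, aquatone, ffuf) are stand-ins; substitute whichever you prefer:

```python
# Step 1 (scope verification) is manual, so it carries no command.
RECON_STEPS = [
    ("verify scope",      None),
    ("find subdomains",   "subfinder -d {d} -silent -o {d}.subs"),
    ("port scan",         "masscan -p1-65535 -iL {d}.ips --max-rate 1000"),
    ("check resolution",  "massdns -r resolvers.txt -t A {d}.subs -w {d}.resolved"),
    ("screenshot",        "cat {d}.resolved | aquatone"),
    ("content discovery", "ffuf -w wordlist.txt -u https://{d}/FUZZ"),
]

def recon_plan(domain: str):
    """Return the concrete commands for one target without executing anything."""
    return [(name, cmd.format(d=domain)) for name, cmd in RECON_STEPS if cmd]

for name, cmd in recon_plan(""):
    print(f"{name}: {cmd}")
```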

Web Tools:


• Recon shouldn’t just be limited to finding assets and outdated stuff. It’s also understanding the app and finding functionality that’s not easily accessible. There needs to be a balance between recon and good old hacking on the application in order to be successful — @NahamSec

Subdomain Enumeration Tools:

  • It is recommended to go through the GitHub links for usage details of each tool.

Enumerating Domains

a. Vertical domain correlation (all subdomains of a base domain) → any subdomain of a particular base domain.
b. Horizontal domain correlation → related domains owned by the same entity (for example, anything acquired by Google).

1. Sublist3r


git clone
sudo pip install -r requirements.txt


– To enumerate subdomains of specific domain:

python -d


alias sublist3r='python /path/to/Sublist3r/ -d '

alias sublist3r-one=". <(cat domains | awk '{print \"sublist3r \"$1\" -o \"$1\".txt\"}')"

2. subfinder


go get


subfinder -d

./subfinder -dL hosts.txt

To find domain recursively:

subfinder -d <domain> -recursive -silent -t 200 -v -o <outfile>

For using bruteforcing capabilities, you can use -b flag with -w option to specify a wordlist.

./subfinder -d -b -w jhaddix_all.txt -t 100 --sources censys --set-settings CensysPages=2 -v

The -o command can be used to specify an output file.

3. findomain

You can monitor the subdomains and provide the webhooks to get notifications on Slack and discord.


$ wget
$ chmod +x findomain-linux


findomain -t

4. assetfinder —

– Find domains and subdomains potentially related to a given domain.


go get -u


assetfinder -subs-only <domain>

cat domains | assetfinder -subs-only ( make sure domains file is without http:// or https://)

5. Amass:


go get -u…


amass enum -o subdomains.txt -d <domain>


amass enum -o out.txt -df domains.txt

All discovered domains are run through reverse WHOIS (horizontal domain enumeration):

amass intel -whois -d

6. censys-enumeration

– This is one of the most important steps: the subdomain names you find here often cannot be found with bruteforce tools, because your wordlist may not contain the patterns used in those subdomains, or keywords like gateway or payment that are part of them.

search query —

A script to extract subdomains/emails for a given domain using SSL/TLS certificates dataset on Censys


– Clone this repo

$ git clone

– Install dependencies

$ pip install -r requirements.txt

– Get Censys API ID and Censys API secret by creating a account on

– Add Censys API ID and Censys API secret as CENSYS_API_ID & CENSYS_API_SECRET respectively to the OS environment variables. On Linux you can use a command similar to following to do this

$ export CENSYS_API_SECRET="iySd1n0l2JLnHTMisbFHzxClFuE0"


$ python — no-emails — verbose — outfile results.json domains.txt

7. altdns

– It generates the possible combinations of original domain with the words from the wordlist (example).


pip install py-altdns


# python -i subdomains.txt -o data_output -w words.txt -s results_output.txt

8. Massdns:


git clone
cd massdns


./bin/massdns [options] [domainlist]

Resolve all A records from domains within domains.txt using the resolvers within resolvers.txt in lists and store the results within results.txt:

$ ./bin/massdns -r lists/resolvers.txt -t A domains.txt > results.txt

9. domains-from-csp

– The Content-Security-Policy header allows a site to create a whitelist of sources of trusted content, and instructs the browser to only execute or render resources from those domains (sources).


$ git clone
$ pipenv install


# python target_url
# python target_url --resolve

10. Using SPF record of DNS

– A Python script to parse netblocks & domain names from SPF(Sender Policy Framework) DNS record

– For every parsed asset, the script will also find and print Autonomous System Number(ASN) details


$ git clone
$ pipenv install


– Parse the SPF record for assets but don’t do ASN enumeration

$ python target_url

– Parse the SPF record for assets and do ASN enumeration

$ python target_url — asn

Get ASN Number:

– Autonomous System Number (ASN) -> -> check for example and checkin Prefixes V4 to get the IP range


$ curl -s | jq -r .as

AS36459 GitHub, Inc.

– The ASN numbers found can be used to find netblocks of the domain.

– We can use advanced WHOIS queries to find all the IP ranges that belong to an ASN

$ whois -h -- '-i origin AS36459' | grep -Eo "([0-9.]+){4}/[0-9]+" | uniq

There is an Nmap script to find IP ranges that belong to an ASN:

$ nmap --script targets-asn --script-args targets-asn.asn=17012 > paypal.txt

Clean up the output from the above nmap result, put all the IPs in a file, and then run version scanning or masscan on them.

nmap -p- -sV -iL paypal.txt -oX paypal.xml

– you can use dig

$ dig AXFR @<nameserver> <domain_name>

11. Certspotter

– Good for vertical and horizontal correlation

– you can get domain names, subdomain names

– email address in a certificate

find-cert() {

curl -s$1 | jq -c '.[].dns_names' | grep -o '"[^"]\+"';
}

12. Crt.sh

13. knockpy


$ sudo apt-get install python-dnspython
$ git clone
Set your virustotal API_KEY:
$ nano knockpy/config.json
$ sudo python install


$ knockpy -w wordlist.txt

1. Shodan -

Ports:8443, 8080
Title: “Dashboard[Jenkins]”
Product: Tomcat
Org: google

To find jenkins instance in a target:

org:"org name" "x-jenkins: 200"

2. (Horizontal domain enumeration) Reverse WHOIS lookup: if you know the email ID in the registrar record of a domain, and you want to check what other domains are registered with the same email ID, you can use this site. Most tools do not do horizontal domain enumeration.

Get email address using — $ whois <>

or get the email and input in this website :

I found that this site gives more domains than

It also has an option to export results as CSV.

3. Sublert —

• Sublert is a security and reconnaissance tool which leverages certificate transparency to automatically monitor new subdomains deployed by specific organizations and newly issued TLS/SSL certificates.


$ git clone && cd sublert
$ sudo pip3 install -r requirements.txt


Let’s add PayPal for instance:

$ python -u

Let’s make the script executable:

$ chmod u+x

Now, we need to add a new Cron job to schedule execution of Sublert at given time. To do it, type:

$ crontab -e

Add the following line at the end of the Cron file:

0 */12 * * * cd /root/sublert/ && /usr/bin/python3 -r -l >> /root/sublert/sublert.log 2>&1

Jason Haddix (
The lost art of LINKED target discovery w/ Burp Suite:
1) Turn off passive scanning
2) Set forms auto to submit
3) Set scope to advanced control and use string of target name (not a normal FQDN)
4) Walk+browse, then spider all hosts recursively!
5) Profit (more targets)!

Content Discovery Tools (Directory Bruteforcing)

• Use robots.txt to determine the directories.

• Also spider the host for API endpoints.

• you see an open port on 8443

• Directory brute force

• /admin/ return 403

• You bruteforce for more files/directories on /admin/

• and let’s say /admin/users.php return 200

• Repeat on other domain, ports, folders etc

1. ffuf

– A fast web fuzzer written in Go.


go get


ffuf -w /path/to/wordlist -u https://target/FUZZ

Virtual host discovery (without DNS records):

• Start by figuring out the response length of false positive:

$ curl -s -H "" | wc -c

Assuming that the default virtual host response size is 4242 bytes, we can filter out all responses of that size (-fs 4242) while fuzzing the Host header:

ffuf -w /path/to/vhost/wordlist -u https://target -H "Host: FUZZ" -fs 4242

GET parameter fuzzing:

GET parameter name fuzzing is very similar to directory discovery, and works by defining the FUZZ keyword as part of the URL. This also assumes a response size of 4242 bytes for an invalid GET parameter name.

ffuf -w /path/to/paramnames.txt -u https://target/script.php?FUZZ=test_value -fs 4242

If the parameter name is known, the values can be fuzzed the same way. This example assumes a wrong parameter value returning HTTP response code 401.

ffuf -w /path/to/values.txt -u https://target/script.php?valid_name=FUZZ -fc 401

POST data fuzzing:

This is a very straightforward operation, again by using the FUZZ keyword. This example is fuzzing only part of the POST request. We’re again filtering out the 401 responses.

ffuf -w /path/to/postdata.txt -X POST -d "username=admin\&password=FUZZ" -u https://target/login.php -fc 401

2. dirsearch


git clone
cd dirsearch
python3 -u <URL> -e <EXTENSION>
python3 -e php,txt,zip -u https://target -w db/dicc.txt — recursive -R 2


$ wget\_redirect\_wordlist.txt


alias dirsearch='python3 /path/to/dirsearch/ -u '
alias dirsearch-one=". <(cat domains | awk '{print \"dirsearch \"$1\" -e *\"}')"
alias openredirect=". <(cat domains | awk '{print \"dirsearch \"$1\" -w /path/to/dirsearch/db/openredirectwordlist.txt -e *\"}')"

3. Dirbuster

4. Gobuster


go get


gobuster dir -u -c 'session=123456' -t 50 -w common-files.txt -x .php,.html

5. wfuzz


pip install wfuzz

Usage:

$ wfuzz -w raft-large-directories.txt --sc 200,403,302

6. Burp Intruder

7. Burp Scanner

Screenshot Tools:

• Look at the headers to see which security options are in place, for example looking for presence of X-XSS-Protection: or X-Frame-Options: deny.

• Knowing what security measures are in place means you know your limitations.
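A quick way to apply this check across many hosts is to diff each response's headers against a baseline list. A sketch over an already-parsed header dict (the baseline is a common subset, not exhaustive):

```python
BASELINE = (
    "X-Frame-Options",
    "X-XSS-Protection",          # legacy, but its absence is still worth noting
    "X-Content-Type-Options",
    "Content-Security-Policy",
    "Strict-Transport-Security",
)

def missing_security_headers(headers: dict):
    """Return the baseline security headers absent from a response (case-insensitive)."""
    present = {name.lower() for name in headers}
    return [h for h in BASELINE if h.lower() not in present]

resp_headers = {"Content-Type": "text/html", "x-frame-options": "DENY"}
print(missing_security_headers(resp_headers))
```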

1. Aquatone


go get -u


cat hosts.txt | aquatone -out ~/aquatone/

2. Eyewitness:


$ git clone

Navigate into the setup directory
Run the script


./EyeWitness -f urls.txt --web

./EyeWitness -x urls.xml --timeout 8 --headless

3. Webscreenshot:


$ apt-get update && apt-get install phantomjs
$ pip install webscreenshot


$ python -i list.txt -v

• Once this is done, we use a tool called epg-prep to create thumbnails. To do so, simply run: epg-prep
This will allow us to view the created pictures using express-photo-gallery.

• In a final step, use the express-gallery script from the bottom of this blogpost and save it as yourname.js. All you need to do is change the folder name inside the script: app.use('/photos', Gallery('', options)); the folder name depends on which target you look at. Once you’ve done that, you can simply run the script using node yourname.js. This will create a webserver listening on port 3000 with an endpoint called /photos. To access it, simply browse to http://yourserverip:3000/photos to get a nice overview of the subdomains you have enumerated.

System Tools
apt update && apt upgrade
curl -sL | sudo -E bash -
apt install -y git wget python python-pip phantomjs xvfb screen slurm gem phantomjs imagemagick graphicsmagick nodejs

Requirements for WebScreenshot

pip install webscreenshot
pip install selenium

Requirements for express-photo-gallery

sudo npm install -g npm
npm install express-photo-gallery
npm install express
npm install -g epg-prep

express-photo-gallery Script

var express = require('express');
var app = express();

var Gallery = require('express-photo-gallery');

var options = {
  title: 'My Awesome Photo Gallery'
};

app.use('/photos', Gallery('', options));

app.listen(3000);


Check CMS

1. Wappalyzer browser extension

2. Builtwith —

3. Retire.js for old JS library

4. Ghostery


Look out for WAFs, you can use WafW00f for that

Popular Google Dorks (for finding bug bounty websites):
responsible disclosure
inurl:index.php?id= bug bounty
"index of" inurl:wp-content/ (identify WordPress websites)
inurl:"q=user/password" (identify Drupal CMS)




contentdiscoveryall.txt from jhaddix:

all.txt from jhaddix —

PayloadAllTheThings —

XSS Payloads-
XSS Payloads —
SQL Injection Payloads —
Google-Dorks Payloads —

Extracting vhosts

Web Tool —
Virtual host scanner —

git clone
ruby scan.rb --ip= --host=domain.tld

Port Scan

• Scan each individual IP address associated with the target’s subdomains and save the output to a file.

• Look for any services running on unusual ports or any service running on default ports which could be vulnerable (FTP, SSH, etc). Look for the version info on services running in order to determine whether anything is outdated and potentially vulnerable

1. Masscan :

This is an Internet-scale port scanner. It can scan the entire Internet in under 6 minutes, transmitting 10 million packets per second, from a single machine.


$ sudo apt-get install git gcc make libpcap-dev
$ git clone
$ cd masscan
$ make -j8

This puts the program in the masscan/bin subdirectory. You’ll have to manually copy it to something like /usr/local/bin if you want to install it elsewhere on the system.


Shell script to run dig

• Because Masscan takes only IPs as input, not DNS names

• Use it to run Masscan against either a name domain or an IP range

strip=$(echo $1|sed 's/https\?:\/\///')
echo ""
echo "##################################################"
host $strip
echo "##################################################"
echo ""
masscan -p1-65535 $(dig +short $strip|grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"|head -1) --max-rate 1000 |& tee "${strip}_scan"

Usage: masscan -p1-65535 -iL $TARGET_LIST --max-rate 10000 -oG $TARGET_OUTPUT
# masscan -p80,8000-8100
# masscan -p80 --banners --source-ip

1. Nmap:

Github For Recon

• GitHub is extremely helpful for finding sensitive information about targets. Access keys, passwords, open endpoints, S3 buckets, backup files, etc. can be found in public GitHub repositories.

• Look for below things during a general first assessment(taken from edoverflow):

– API and key. (Get some more endpoints and find API keys.)

– token

– secret


– password

– vulnerable 😜

– http:// & https://

Then I will focus on terms that make me smile when developers mess things up:


– random

– hash

– MD5, SHA-1, SHA-2, etc.


Github Recon Tools

1. gitrob:

– Gitrob is a tool to help find potentially sensitive files pushed to public repositories on Github. Gitrob will clone repositories belonging to a user or organization down to a configurable depth and iterate through the commit history and flag files that match signatures for potentially sensitive files. The findings will be presented through a web interface for easy browsing and analysis.


$ go get


gitrob [options] target [target2] … [targetN]

2. shhgit —

– Shhgit finds secrets and sensitive files across GitHub code and Gists committed in near real time by listening to the GitHub Events API.


$ go get


• To configure it check the github page.

• Unlike other tools, you don’t need to pass any targets with shhgit. Simply run $ shhgit to start watching GitHub commits and find secrets or sensitive files matching the included 120 signatures.

Alternatively, you can forgo the signatures and use shhgit with a search query, e.g. to find all AWS keys you could use

shhgit --search-query AWS_ACCESS_KEY_ID=AKIA

3. Trufflehog:

– Searches through git repositories for high entropy strings and secrets, digging deep into commit history.


pip install truffleHog


$ truffleHog --regex --entropy=False

4. git-all-secrets —

– It clones the public/private GitHub repos of an org, and of users belonging to the org, and scans them.

– Clones gist belonging to org and users of org.


git clone


docker run --rm -it abhartiya/tools_gitallsecrets --help
docker run -it abhartiya/tools_gitallsecrets -token=<> -org=<>

4. gitGraber

– Monitors GitHub to search and find sensitive data in real time for different online services such as Google, Amazon, PayPal, GitHub, Mailgun, Facebook, Twitter, Heroku, and Stripe.


git clone
cd gitGraber
pip3 install -r requirements.txt


python3 -k wordlists/keywords.txt -q “uber” -s

We recommend creating a cron job that will execute the script regularly:

*/15 * * * * cd /BugBounty/gitGraber/ && /usr/bin/python3 -k wordlists/keywords.txt -q "uber" -s >/dev/null 2>&1

Do it manually:

• A quick Google “Gratipay GitHub” should return Gratipay’s org page on GitHub. Then from there I am going to check what repos actually belong to the org and which are forked. You can do this by selecting the Type: dropdown on the right hand side of the page. Set it to Sources.

• Now, I am going to take a look at the different languages that the projects are written in. My favourite language is Python so I might start focusing on Python projects, but for recon I will mostly just keep note of the different languages.

• After that I will start using the GitHub search bar to look for specific keywords.

org:gratipay hmac

• There are 4 main sections to look out for here.

– Repositories is nice for dedicated projects related to the keyword. For example, if the keyword is “password manager”, I might find they are building a password manager.

– Code is the big one. You can search for classic lines of code that cause security vulnerabilities across the whole organization.

– Commits is not usually my favourite area to look at manually, but if I see a low number I might have a quick look.

– Issues this is the second biggest and will help you all with your recon. This is the gold mine.

Companies share so much information about their infrastructure in issue discussions and debates. Look for domains and subdomains in those tickets.

Chris: “Oh, hey John. We forgot to add this certificate to this domain:”


• “” “dev”

• “”

• “” API_key

• “” password

• “” authorization

• others

Read every JS File

Sometimes, JavaScript files contain sensitive information, including various secrets or hardcoded tokens. It’s always worth examining JS files manually.
Look for the following things in JavaScript:

• AWS or Other services Access keys

• AWS S3 buckets or other data storage buckets with read/write permissions.

• Open backup sql database endpoints

• Open endpoints of internal services.

JS File Parsing

1. JSParser:


$ git clone

$ apt install libcurl4-openssl-dev libssl-dev

$ pip3 install -r requirements.txt
$ python install


Run and then visit http://localhost:8008.

$ python

2. LinkFinder:

LinkFinder is a Python script written to discover endpoints and their parameters in JavaScript files.


$ git clone
$ cd LinkFinder
$ pip3 install -r requirements.txt
$ python install


• Most basic usage to find endpoints in an online JavaScript file and output the HTML results to results.html:

python -i -o results.html

• CLI/STDOUT output (doesn’t use jsbeautifier, which makes it very fast):

python -i -o cli

• Analyzing an entire domain and its JS files:

python -i -d

• Burp input (select in target the files you want to save, right click, Save selected items, feed that file as input):

python -i burpfile -b

• Enumerating an entire folder for JavaScript files, while looking for endpoints starting with /api/ and finally saving the results to results.html:

python -i 'Desktop/*.js' -r ^/api/ -o results.html

3. getJS

– A tool to quickly get all JavaScript sources/files


go get


cat domains.txt | getJS |tojson

To feed urls from a file use:

$ getJS -input=domains.txt

4. InputScanner

– A tool designed to crawl a list of URLs and scrape input names (or ids if no name is found). This tool will also scrape .js URLs found on each page (for further testing).


Somewhere to run PHP. Recommended to use LAMP/XAMPP locally so you can just run the PHP from your computer locally. You can grab XAMPP from here:

• Clone in /var/www

git clone


– Now you’re setup, it’s time to gather some URLs to test. Use Burp Spider to crawl.

– Once the spider has finished (or you stop it), right click on the host and click “Copy Urls in this host”.

– Once copied, paste them into urls.txt. Now open payloads.txt and enter some payloads you wish to inject into each parameter (such as xss" and xss' to test for the reflection of " and ' characters on inputs; this will help automate looking for XSS). The script will inject each payload into each parameter, so the more payloads, the more requests you’ll be sending.

– Now visit and begin scan

– Once the scanner is complete you will be given 4 txt file outputs (see below). Use the BURP Intruder to import your lists and run through them.

– 4 files are outputted in the /outputs/ folder: JS-output.txt, GET-output.txt, POSTHost-output.txt, POSTData-output.txt.

• GET-output.txt is a file which can be easily imported into a BURP intruder attack (using the Spider type). Set the position in the header (GET §val§ HTTP/1.0) and run the attack. Make sure to play with settings and untick “URL-encode these characters”, found on the Payloads tab. Currently the script will echo the HOST url, and I just mass-replace in a text editor such as Sublime. (Replace to null). You are free to modify the script for how you see fit.

• JS-output.txt contains a list of .js URLs found on each page. The format is found@||, so you can easily load it into JS-Scan (another tool released by me) and it will let you know where each .js file was found as it scrapes.

• POSTHost-output.txt contains a list of HOST urls (such as which is used for the “Pitchfork” burp intruder attack. Use this file along with POSTData-output.txt. Set attack type to “Pitch fork” and set one position in the header (same as Sniper attack above), and another at the bottom of the request (the post data sent). Make sure to set a Content-Length etc.

• POSTData-output.txt contains post data. (param1=xss”&param2=xss”&param3=xss”)

5. JS-Scan

– A tool designed to scrape a list of .js files and extract urls, as well as juicy information.


1. Install LAMP/XAMPP Server.

2. InputScanner to scrape .js files

3. Clone repo:

git clone


– Import JS-output.txt file in this interface —


• Searching for the target’s webpages in the Wayback Machine, the following things can be found:
- Old and abandoned JS files
- Old API endpoints
- Abandoned CDN endpoints
- Abandoned subdomains
- Dev & staging endpoints with juicy info in source code comments
If you are getting a 403 on a page, you can also search for that page in the Wayback Machine; sometimes you will find it open, with helpful information.

1. waybackurls —

– Fetch all the URLs that the Wayback Machine knows about for a domain.


go get


cat domains.txt | waybackurls > urls

2. waybackunifier

– WaybackUnifier lets you see how a file has looked over time by aggregating all versions of the file and creating a unified version that contains every line that has ever been in it.


go get



-concurrency int
Number of requests to make in parallel (default 1)
-output string
File to save results in (default “output.txt”)
-sub string
list of comma-separated substrings to look for in snapshots (snapshots will only be considered if they contain one of them) (default “Disallow,disallow”)
-url string
URL to unify versions of (without protocol prefix) (default “”)


Lot of web mindmaps:

Subdomain Takeover Tools

1. SubOver

– A Powerful Subdomain Takeover Tool


go get


./SubOver -l subdomains.txt

2. subjack

– Subjack is a subdomain takeover tool written in Go, designed to scan a list of subdomains concurrently and identify ones that can be hijacked.

– Subjack will also check for subdomains attached to domains that don’t exist (NXDOMAIN) and are available to be registered. No need for dig ever again


go get


./subjack -w subdomains.txt -t 100 -timeout 30 -o results.txt -ssl

3. TakeOver-v1

– It gives the CNAME of all the subdomains from a file


git clone


./ subdomain.txt

4. subzy

• Subdomain takeover tool which works by matching response fingerprints from can-i-take-over-xyz


go get -u -v
go install -v


List of subdomains

./subzy -targets list.txt
Single or few subdomains

./subzy -target
./subzy -target,

Other/Interesting Tools

1. Parameth

– This tool can be used to brute-force discover GET and POST parameters

– Often when you are busting a directory for common files, you can identify scripts (for example test.php) that look like they need to be passed an unknown parameter. This hopefully can help find them.


git clone
virtualenv venv
. ./venv/bin/activate
pip install -r requirements.txt


./ -u

2. Arjun

– HTTP parameter discovery suite.



– Scanning a single URL

To find GET parameters, you can simply do:

python3 -u — get

Similarly, use --post for POST and --json to look for JSON parameters.

– Scanning multiple URLs

A list of URLs stored in a file can be tested by using the --urls option as follows:

python3 — urls targets.txt — get

3. fuxploider

– File upload vulnerability scanner and exploitation tool.


git clone
cd fuxploider
pip3 install -r requirements.txt


To get a list of basic options and switches use :

python3 -h

Basic example :

python3 — url — not-regex “wrong file type”

4. Syborg —

– Recursive DNS Subdomain Enumerator with dead-end avoidance system


Clone the repo using the git clone command as follows:

git clone

Resolve the Dependencies:

pip3 install -r requirements.txt



At times, it is also possible that Syborg will hit high CPU usage, which can cost you a lot if you are running the tool on a VPS. To limit that, use another utility called cpulimit:

cpulimit -l 50 -p $(pgrep python3)

This tool can be downloaded as follows:

sudo apt install cpulimit

5. dnsgen

– Generates combinations of domain names from the provided input.

– Combinations are created based on wordlist. Custom words are extracted per execution.


pip3 install dnsgen

..or from GitHub:

git clone
cd dnsgen
pip3 install -r requirements.txt
python3 install


$ dnsgen domains.txt (domains.txt contains a list of active domain names)

Combination with massdns:

$ cat domains.txt | dnsgen - | massdns -r /path/to/resolvers.txt -t A -o J --flush 2>/dev/null

6. SSRFmap

– Automatic SSRF fuzzer and exploitation tool

– SSRFmap takes a Burp request file as input and a parameter to fuzz.


$ git clone
$ cd SSRFmap/
$ pip3 install -r requirements.txt


First you need a request with a parameter to fuzz; Burp request files work well with SSRFmap. They should look like the following. More examples are available in the /data folder.

POST /ssrf HTTP/1.1
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://mysimple.ssrf/
Content-Type: application/x-www-form-urlencoded
Content-Length: 31
Connection: close
Upgrade-Insecure-Requests: 1

Use the -m option followed by a module name (separated by a , if you want to launch several modules).

# Launch a portscan on localhost and read default files
python -r data/request.txt -p url -m readfiles,portscan
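Parsing a raw Burp request file of the kind SSRFmap consumes is straightforward; a simplified reader (not SSRFmap's actual parser) looks like this:

```python
# Parse a raw HTTP request (as saved from Burp) into method, path,
# headers, and body. Simplified: assumes \n line endings and one
# blank line between headers and body.
def parse_request(raw: str):
    head, _, body = raw.partition("\n\n")
    lines = head.splitlines()
    method, path, _ = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(": ")
        headers[name] = value
    return method, path, headers, body

raw = """POST /ssrf HTTP/1.1
Host: mysimple.ssrf
Content-Type: application/x-www-form-urlencoded

url=http://127.0.0.1/"""

method, path, headers, body = parse_request(raw)
print(method, path, headers["Host"], body)
# → POST /ssrf mysimple.ssrf url=http://127.0.0.1/
```

With the request broken apart like this, a fuzzer can substitute payloads into the chosen parameter (-p url above) and replay the request.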



– Dead simple wildcard DNS for any IP Address

Stop editing your /etc/hosts file with custom hostname and IP address mappings. allows you to map any IP address to a hostname using the following formats:

– maps to

– maps to

8. CORS Scanner

– Fast CORS misconfiguration vulnerabilities scanner


git clone

– Install dependencies

sudo pip install -r requirements.txt


– To check CORS misconfigurations of specific domain:

python -u

– To check CORS misconfigurations of specific URL:

python -u

– To check CORS misconfiguration with specific headers:

python -u -d "Cookie: test"

– To check CORS misconfigurations of multiple domains/URLs:

python -i top_100_domains.txt -t 100
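What such scanners look for boils down to sending a crafted Origin header and inspecting the Access-Control-Allow-Origin (and -Credentials) response headers. A sketch of the decision logic, not the scanner's actual code:

```python
# Classify a CORS response for an attacker-supplied Origin header.
# Reflected or null origins combined with credentials are the classic
# exploitable misconfigurations; a bare wildcard is lower impact.
def classify_cors(origin_sent: str, headers: dict):
    acao = headers.get("Access-Control-Allow-Origin")
    creds = headers.get("Access-Control-Allow-Credentials") == "true"
    if acao == origin_sent and creds:
        return "reflected origin with credentials"
    if acao == "null" and creds:
        return "null origin with credentials"
    if acao == "*":
        return "wildcard origin"
    return "ok"

print(classify_cors("", {
    "Access-Control-Allow-Origin": "",
    "Access-Control-Allow-Credentials": "true",
}))
# → reflected origin with credentials
```

A scanner repeats this for several Origin variants (subdomain prefixes, suffix matches, null) per target.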

9. Blazy

– Blazy is a modern login bruteforcer which also tests for CSRF, clickjacking, Cloudflare, and WAFs.


git clone
cd Blazy
pip install -r requirements.txt



10. XSStrike

– Most advanced XSS scanner.


git clone


Scan a single URL
Option: -u or --url

Test a single webpage that uses the GET method.

python -u ""

Supplying POST data:
python -u "" --data "q=query"

11. Commix

– Automated All-in-One OS command injection and exploitation tool.


git clone commix


# python --url="" --data="ip=" --cookie="security=medium; PHPSESSID=nq30op434117mo7o2oe5bl7is4"

12. Bolt

– A dumb CSRF scanner


git clone


Scanning a website for CSRF using Bolt is as easy as doing

python3 -u -l 2

1. bass

– Bass grabs you those "extra resolvers" you are missing out on when performing active DNS enumeration. Add anywhere from 100-6k resolvers to your "resolver.txt"


git clone
cd bass
pip3 install -r requirements.txt


python3 -d -o output/file/for/final_resolver_list.txt

1. meg

• meg is a tool for fetching lots of URLs but still being ‘nice’ to servers.

It can be used to fetch many paths for many hosts; fetching one path for all hosts before moving on to the next path and repeating.


go get -u


Given a file full of paths:


And a file full of hosts (with a protocol):

meg will request each path for every host:

▶ meg --verbose paths hosts

meg <endpoint> <host>
$ meg /

The latter command requests the top-level directory for the given host. It is important to note that protocols must be specified; meg does not automatically prefix hosts. If you happen to have a list of targets without protocols, sed the file to add the correct protocol.

$ sed 's#^#http://#g' list-of-hosts > output

By default meg stores all output in an out/ directory, but if you would like to include a dedicated output directory, all it takes is appending the output directory to your command as follows:

$ meg / out-edoverflow/

Say we want to pinpoint specific files that could either assist us further while targeting a platform or be an actual security issue in itself if exposed to the public, all it takes is a list of endpoints (lists/php) and a series of targets (targets-all). For this process, storing all pages that return a “200 OK” status code will help us sieve out most of the noise and false-positives (-s 200).

$ meg -s 200 \
lists/php targets-all \
out-php/ 2> /dev/null
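meg's politeness comes from its request ordering: one path across all hosts before moving to the next path, so consecutive requests hit different servers. That scheduling can be sketched as:

```python
# meg-style path-major ordering: the outer loop is over paths, so no
# single host receives back-to-back requests.
def request_order(paths, hosts):
    return [host + path for path in paths for host in hosts]

for url in request_order(["/robots.txt", "/.git/HEAD"],
                         ["http://a.example", "http://b.example"]):
    print(url)
```

Compare this with the host-major ordering a naive loop would produce, which hammers each server with the full path list in a burst.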

2. tojson

• Turn lines of stdin into JSON.


go get -u


getJS -url= | tojson
ls -l | tojson
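What tojson does is small enough to sketch in Python (the real tool is written in Go, and its exact output format may differ): collect lines from stdin and emit them as JSON.

```python
import io
import json

# Sketch of tojson's job: turn lines of input into a JSON array,
# making shell output easy to feed into JSON-consuming tools.
def lines_to_json(stream):
    return json.dumps([line.rstrip("\n") for line in stream])

print(lines_to_json(io.StringIO("main.js\napp.js\n")))
# → ["main.js", "app.js"]
```

Piping tool output through a converter like this is useful whenever the next stage in your pipeline (e.g. jq) expects structured data.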

3. interlace

• Easily turn single threaded command line applications into a fast, multi-threaded application with CIDR and glob support.


$ git clone
$ python3 install


Let’s say we need to run Nikto (a basic, free web server vulnerability scanner) over a list of hosts:

luke$ cat targets.txt

$ interlace -tL ./targets.txt -threads 5 -c "nikto --host _target_ > ./_target_-nikto.txt" -v

==============================================
Interlace v1.2 by Michael Skelton (@codingo_)
==============================================
[13:06:16] [VERBOSE] [nikto --host > ./] Added after processing
[13:06:16] [VERBOSE] [nikto --host > ./] Added after processing
[13:06:16] [VERBOSE] [nikto --host > ./] Added after processing
[13:06:16] [VERBOSE] [nikto --host > ./] Added after processing
[13:06:16] [THREAD] [nikto --host > ./] Added to Queue

Let’s break this down a bit — here’s the command I ran:

interlace -tL ./targets.txt -threads 5 -c "nikto --host _target_ > ./_target_-nikto.txt" -v

– interlace is the name of the tool.

– -tL ./targets.txt defines a file with a list of hosts.

– -threads 5 defines the number of threads.

– -c should be immediately followed by the command you want to run.

– "nikto --host _target_ > ./_target_-nikto.txt" is the actual command which will be run; note that instances of _target_ will be replaced with each line in the ./targets.txt file.

– -v makes it verbose.

Good Articles to Read

1. Subdomain Takeover by Patrik

2. Subdomain Enumeration

3. can-i-take-over-xyz

4. File Upload XSS

5. Serverless Toolkit for Pentesters

6. Docker for Pentesters

7. For coding

8. Android Security Lab Setup

9. SSL Bypass

10. Bypass Certificate Pinning

11. Burp Macros and Session Handling

12. Burp Extensions

13. JSON CSRF to Form Data Attack

14. meg

15. assetnote: push notifications for new domains

16. interlace

17. HTTP Desync Attacks: Request Smuggling Reborn


• The Art of Subdomain Enumeration

• Setup Bug Bounty Tools

• ReconPi

• TotalRecon (installs all the tools)

• Auto Recon Bash Script

Recon My Way

1. Do subdomain enumeration using amass, assetfinder, and subfinder

amass enum --passive -d <DOMAIN>

assetfinder --subs-only <domain>

subfinder -d

subfinder -d <domain> -recursive -silent -t 200 -v -o <outfile>

2. Use commonspeak2 wordlist to get probable permutations of above subdomains.

To generate the possibilities, you can use this simple Python snippet:

scope = '<DOMAIN>'
wordlist = open('./commonspeak2.txt').read().split('\n')

for word in wordlist:
    if not word.strip():
        continue
    print('{}.{}'.format(word.strip(), scope))

3. Use massdns to resolve all the above domains:

./bin/massdns -r lists/resolvers.txt -t A domains.txt > results.txt

4. To get the best resolvers.txt use bass tool:

python3 -d -o output/file/for/final_resolver_list.txt

5. Use dnsgen to generate combinations of domain names from the provided input.

cat domains.txt | dnsgen - | massdns -r /path/to/resolvers.txt -t A -o J --flush 2>/dev/null

6. Port Scan Using masscan and nmap for version scan

Shell script to run dig

– Because Masscan takes only IPs as input, not DNS names

– Use it to run Masscan against either a name domain or an IP range

strip=$(echo $1 | sed 's/https\?:\/\///')
echo ""
echo "##################################################"
host $strip
echo "##################################################"
echo ""
masscan -p1-65535 $(dig +short $strip | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" | head -1) --max-rate 1000 |& tee ${strip}_scan


nmap -iL list.txt -Pn -n -sn -oG output.txt
masscan -iL output.txt -p0-65535 --max-rate 1000

or run masscan + nmap using the script below


nmap scan with output in a nice XML file:
$ sudo nmap -sS -T4 -sC -oA myreportname --stylesheet -iL subdomain.txt

7. Do github recon

8. Take screenshot using aquatone.

cat hosts.txt | aquatone -out ~/aquatone/

9. Run ffuf or gobuster for directory bruteforce/content discovery on a particular domain/subdomain

ffuf -w /path/to/wordlist -u https://target/FUZZ

10. Read JS file, get the endpoints, check if there is any secret token/key in JS files.

11. Use waybackurls to get old JS files, and 403 files.

– Generate wordlist using wayback

# curl -s "*&output=text&fl=original&collapse=urlkey" | sed 's/\//\n/g' | sort -u | grep -v 'svg\|.png\|.img\|.ttf\|http:\|:\|.eot\|woff\|ico\|css\|bootstrap\|wordpress\|.jpg\|.jpeg' > wordlist


# curl -L | bash
# curl -s "*&output=text&fl=original&collapse=urlkey" | sed 's/\//\n/g' | sort -u | grep -v 'svg\|.png\|.img\|.ttf\|http:\|:\|.eot\|woff\|ico\|css\|bootstrap\|wordpress\|.jpg\|.jpeg' | perl -pe 's/\%(\w\w)/chr hex $1/ge' > wordlist.txt
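The same wordlist extraction (split archived URLs on /, de-duplicate, drop static-asset noise, URL-decode) can be expressed in Python, which makes the filter list easier to maintain. A sketch with example input URLs:

```python
from urllib.parse import unquote

# Extensions to drop, mirroring the grep -v filter in the pipeline above.
SKIP = (".png", ".jpg", ".jpeg", ".svg", ".ttf", ".eot",
        ".woff", ".ico", ".css")

# Turn a list of archived URLs into a path-segment wordlist, like the
# sed | sort -u | grep -v | perl pipeline above.
def wayback_wordlist(urls):
    words = set()
    for url in urls:
        for part in url.split("/"):
            part = unquote(part)  # decode %xx sequences, like the perl step
            if part and ":" not in part and not part.lower().endswith(SKIP):
                words.add(part)
    return sorted(words)

print(wayback_wordlist([
    "http://example/admin/login.php",
    "http://example/img/logo%20final.png",
]))
# → ['admin', 'example', 'img', 'login.php']
```

Feed the result straight into ffuf or gobuster as a target-specific wordlist.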

12. One-liner to import a whole list of subdomains into Burp Suite for automated scanning:

cat <file-name> | parallel -j 200 curl -L -o /dev/null {} -x -k -s


Written by

Bug Hunter, Linux Security Engineer
