Day 14: WebPwn (Automate Web Hacking) - Part 1

My background is web-app hacking. I used to love it, then it became too damn easy: you can always hack web apps one way or another, even if it's vanilla stuff like clickjacking. Just look at all the bug bounty reports, the web is broken! But that's ok because we want to PWN it anyway. But we ain't got time for dat, so we need to build a bot that does the work for us.

What we will build

  • Web scraper to get all the links from a page
  • Filter out duplicates so we only keep unique URLs
  • Pass the unique URLs to a processing engine
  • Look for common issues like SQLi, XSS, RCE, etc., plus known CVEs, and report anything found
  • If an issue is found, generate a report and send an email
  • Tweet that we found another web bug
  • Rinse and repeat (rough skeleton of the pipeline sketched below)
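
To give a feel for where this is heading, here is a rough skeleton of that pipeline. It is only a sketch: the function names (crawl, unique_urls, scan_url, report) are placeholders I'm using for illustration, not the engine we will actually build out over the coming parts.

#Rough shape of the full bot - each stage gets fleshed out in later parts
def crawl(start_url):
    """Scrape the target and return every link found (today's part)."""
    return []

def unique_urls(urls):
    """Throw away duplicates so we only test each endpoint once."""
    return list(set(urls))

def scan_url(url):
    """Probe for SQLi/XSS/RCE and known CVEs; return a finding or None."""
    return None

def report(finding):
    """Generate the report, send the email, tweet the win."""
    pass

def run(start_url):
    for url in unique_urls(crawl(start_url)):
        finding = scan_url(url)
        if finding:
            report(finding)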

I used one small part of this engine and found over 250,000 vulnerabilities in one month just by running the script.

To build this we are going to have to add some complex parts, well, complex for beginners and people not familiar with Python, so we will take it slow. Today we will crudely scrape a site for URLs and print them to the console. What we have today is small and crude but still highly effective: run it against a target and look for URL parameters and other entry points you can fuzz/inject. Have some fun!

The Link Crawler Code

We will use mechanize to script the browser; it's very easy to work with, and if you saw the Twitter OSINT post, it's what we used there for enumeration.
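
If you don't already have mechanize installed, pip install mechanize will pull it down from PyPI.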

#Create new browser instance
br = mechanize.Browser()
#Add headers, later we will fuzz here too but for now just use vanilla agent that will get past lame WAFs/filters
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36 OPR/38.0.2220.41')]
#Get the response object
response = br.open(link)
#Now loop through all links found
for link in br.links():

Next we loop through the links found and do the same again to get their links…

linkBrowser = mechanize.Browser()
linkBrowser.addheaders = [('User-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36 OPR/38.0.2220.41')]
try:
    linkBrowser.open(link.url)
    for lf in linkBrowser.links():
        print lf.url
except:
    pass

Start the crawler using the first argument supplied…

crawler(sys.argv[1])
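
As a small safety net (not part of the original script, just a convenience) you can check that a start URL was actually supplied before kicking things off:

#Optional: bail out with a usage hint if no start URL was given
if len(sys.argv) < 2:
    print "Usage: %s <start url>" % sys.argv[0]
    sys.exit(1)
crawler(sys.argv[1])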

Final Code

import sys
import mechanize

def crawler(link):
    #Create a new browser instance and set a vanilla user agent
    br = mechanize.Browser()
    br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36 OPR/38.0.2220.41')]
    #Open the start page
    response = br.open(link)
    #Loop through every link on the page and crawl each one in turn
    for link in br.links():
        linkBrowser = mechanize.Browser()
        linkBrowser.addheaders = [('User-agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.106 Safari/537.36 OPR/38.0.2220.41')]
        try:
            linkBrowser.open(link.url)
            #Print every link found on the second-level page
            for lf in linkBrowser.links():
                print lf.url
        except:
            pass

crawler(sys.argv[1])
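
Save the script as, say, crawler.py (the name is up to you, nothing in the code depends on it) and run it with python crawler.py http://example.com; within a few seconds you should see second-level URLs printing to the console.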

In the next part of this series we will filter the results down to unique URLs, loop over them to identify and extract payload entry points, and try sending some easy XSS/SQLi payloads. We will also change the user agent on each request, which helps when scanning many targets or thousands of endpoints.
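
If you want a head start on that, here is a rough sketch of the idea: dedupe with a set and pick a random user agent per request. This is only an illustration; the helper name crawl_unique and the agent strings are placeholders of mine, not the code we will end up with.

import random
import mechanize

#Throwaway agent strings for illustration - swap in your own list
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (X11; Linux x86_64)',
]

def crawl_unique(start_url):
    br = mechanize.Browser()
    #Pick a random agent for this request; over many URLs this rotates agents
    br.addheaders = [('User-agent', random.choice(USER_AGENTS))]
    br.open(start_url)
    #A set throws away duplicate URLs for free
    return set(link.url for link in br.links())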