Collecting Payloads From CTF PCAPs

Px Mx
Apr 29, 2016

In my last post, Collecting XSS Subreddit Payloads, I touched on the usefulness of collecting payloads from various sources. Another good source of interesting payloads is packet capture (PCAP) files from Capture the Flag (CTF) events. A CTF event can have participants of widely different skill and experience levels competing to solve numerous challenges, so the quality of the payloads can range from plain noise to a well-crafted exploit. For the purposes of this post, the focus is on collecting only the HTTP GET requests from CTF PCAP files. The results are now included in my payloads project on GitHub (https://github.com/foospidy/payloads).

Collecting these request payloads involved a few basic steps. The first was finding CTF PCAP files! The second was parsing the network data from the PCAP files to pull out the HTTP requests and getting rid of any duplicate data. The final step was automating it all with simple scripts to make the process repeatable. It was a fun exercise, and it turns out to be relatively easy to do, so I want to use this post to explain how it can be done.

Finding PCAPs

Of course I started with Google, using the simple search terms “ctf pcap”. In fact, let me Google that for you :-). The results include several references to PCAP files that are used in challenges, but that’s not what I was looking for. However, within the first five results or so there was one great site (http://www.netresec.com/?page=PcapFiles) with a list of PCAP files from several CTF events. Perfect for getting started. Across the CTF events there were a large number of PCAP files to download, too many to download manually, so a script was needed for that. Other PCAPs could be retrieved using a BitTorrent client.

So far I’ve only retrieved PCAPs from this one site. If you are aware of other resources hosting CTF PCAP files, please share by posting a comment. Or even better, use the steps I outline below and contribute to the payloads project.

Parsing PCAPs

If you’ve ever looked at a PCAP file with a tool like Wireshark, it can be intimidating at first. Just thinking about how to parse through all the network traffic to extract a specific part of every HTTP request sounds even more daunting. I’m sure there are many tools and libraries available to help take on such a task, but fortunately I thought of one approach that would not require research or custom coding. I figured I’d call upon our good Bro… bro. If you haven’t heard of The Bro Network Security Monitor before, you should absolutely check it out. It is an extremely powerful network monitoring platform, but it can also be used as a tool to parse PCAP files by replaying them through the Bro engine. Given Bro’s capabilities, I was able to easily create a script with a few commands to parse all HTTP traffic out of a PCAP file and then extract the URI field. Exactly what I was after, thanks Bro!

How I used Bro

Even if you don’t want to use it as a full network monitoring solution, Bro can be very handy in dealing with PCAP files. Extracting the information I wanted was as easy as these two commands:

1. Using the bro command with the -r argument tells Bro to read network traffic from a file. When Bro processes the PCAP file it will parse all network protocols that it knows about and output the traffic details to a log file for each protocol (e.g. http.log, ftp.log, dns.log, smtp.log, etc.). The file I needed was of course the http.log file. Example of running this command:

/opt/bro/bin/bro -r ctf_file.pcap
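
If you want to confirm what Bro produced, list the directory after the run. A rough sketch of what to expect (the exact set of log files depends on the Bro version and which protocols are present in the capture):

ls *.log
# example output (will vary): conn.log dns.log files.log http.log weird.log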

2. Next, the bro-cut command is used to print specific columns from any of the log files. To see which columns are available in any particular log file, just view the beginning of that file. All I needed was the URI column from the http.log file. Rather than just printing to standard out, I redirected the output to another file. Example of running this command:

/opt/bro/bin/bro-cut uri < http.log > hxxp.log
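
As mentioned above, the available columns are listed in the commented header at the top of each log file. A quick way to see them (the exact field list varies by Bro version):

head http.log
# the #fields header line names the columns, e.g.:
# #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p ... method host uri ...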

That’s all, just those two commands and I’ve extracted all HTTP GET requests from the PCAP file. I should point out that this actually includes any paths and query strings from HTTP POST requests as well, but not the POST data itself. Now I just needed to trim down the output by eliminating all the duplicate requests. This is easily accomplished by adding the sort command with the -u argument to the mix. Example of running this command:

/opt/bro/bin/bro-cut uri < http.log | sort -u > hxxp.log

Automation

There were a lot of PCAP files to download, so it was scripts to the rescue. For the files that can be downloaded from the web site I used the scripts below. The scripts require a url.txt file that contains all the URLs.

example url.txt file
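
For reference, a hypothetical url.txt simply lists one PCAP URL per line (these URLs are made up):

http://example.com/ctf2016/round1.pcap
http://example.com/ctf2016/round2.pcap
http://example.com/ctf2016/round3.pcap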

Use this script when the PCAP files are not compressed.

#!/bin/bash
for url in `cat url.txt` # loop through each line in url.txt
do
wget $url -O thefile.pcap # download the pcap file
/opt/bro/bin/bro -r thefile.pcap # parse pcap file with bro
cat http.log >> hxxp.log # append http data to hxxp.log
rm thefile.pcap # delete the pcap file
done
# output uri column to sort and then save to hxxp.sortu.log
/opt/bro/bin/bro-cut uri < hxxp.log | sort -u > hxxp.sortu.log

Use this script when the PCAP files are compressed. Obviously, modify the script if something other than gzip was used (e.g. bzip2 or zip); a sketch of those variants follows the script.

#!/bin/bash
for url in `cat url.txt` # loop through each line in url.txt
do
wget $url -O thefile.pcap.gz # download the gzip file
gunzip thefile.pcap.gz # uncompress the gzip file
/opt/bro/bin/bro -r thefile.pcap # parse pcap file with bro
cat http.log >> hxxp.log # append http data to hxxp.log
rm thefile.pcap # delete the pcap file
done
# output uri column to sort and then save to hxxp.sortu.log
/opt/bro/bin/bro-cut uri < hxxp.log | sort -u > hxxp.sortu.log
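
If the archives are bzip2 or zip rather than gzip, the download and decompress steps inside the loop might instead look something like this (hypothetical filenames):

wget $url -O thefile.pcap.bz2 # download the bzip2 file
bunzip2 thefile.pcap.bz2 # uncompress, leaving thefile.pcap

# or, for zip archives:
wget $url -O thefile.zip # download the zip file
unzip -o thefile.zip # extract the .pcap file(s); adjust the bro -r filename accordingly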

For torrent files, once extracted use the following script.

#!/bin/bash
for f in `ls *.pcap` # loop through each pcap file in the directory
do
/opt/bro/bin/bro -r $f # parse pcap file with bro
cat http.log >> hxxp.log # append http data to hxxp.log
done
# output uri column to sort and then save to hxxp.sortu.log
/opt/bro/bin/bro-cut uri < hxxp.log | sort -u > hxxp.sortu.log

Once the scripts finish running, the hxxp.sortu.log file will look something like this.

example hxxp.sortu.log file

Note: in the above screenshot I selected a section of the file that has some interesting payloads. However, the file will contain a ton of request strings for random things.

Conclusion

This was another fun exercise in collecting payloads. While most of this CTF traffic is likely benign, I think it is still useful to have as part of the payloads project. The more junk you can throw at a web application, the more likely you are to generate application errors, and hopefully stumble onto a bug worth investigating further. I guess you can say this approach is a kind of “dumb fuzzing”. Alternatively, combing through these CTF requests could turn up a few payload gems that can be used in a more targeted fashion.

If you’ve made it to the end of this post I hope you found this to be helpful, or at least interesting. Feedback and comments are welcome. Thanks bro!
