3-Step RDP Honeypot: Step 2 | Operationalize PCAPs

Chapin Bryce
Feb 15 · 5 min read

If you are joining the series here, please consider starting with the Introduction and Part 1, as they cover the objectives and setting up pre-requisites for this phase of the honeypot design.

PCAP data is valuable: it provides insight into all network communications across a link and allows for a deep dive on activity across the wire. Anything that detailed is also verbose, and in many cases the storage requirements make retaining full PCAP data infeasible. Our RDP honeypot is not immune to this. In Part 1 of this series we limited collection to 500 MB per hour, and even then we are prone to filling our storage capacity in short order. We have mitigated that by collecting only 3389/TCP traffic, though over time we will have to consider how long we wish to retain full PCAP data.
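One way to enforce a retention window is a small pruning helper run periodically. This is a minimal sketch under assumed values: the directory, file naming pattern (matching the `rdp.*.pcap*` rotation used later in this article), and 7-day retention are all illustrative, so adjust them to your tcpdump configuration:

```shell
#!/bin/bash
## Hypothetical retention helper: prune_pcaps <dir> <days>
## Deletes rotated capture files older than <days> days from <dir>.
prune_pcaps() {
    local dir="$1" days="$2"
    find "$dir" -name 'rdp.*.pcap*' -type f -mtime +"$days" -delete
}
```

For example, `prune_pcaps /data/pcap 7` keeps roughly the last week of captures; be sure Moloch has already indexed a file before its retention window lapses.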

Photo by Dennis Buchner on Unsplash

One way to mitigate this, while also automating the extraction of key artifacts, is to process and index our PCAPs with Moloch. Moloch is a free tool developed by the team at AOL; we feed it raw PCAPs and it creates an index of the most important features for searching and analysis.

This tool provides a lot of value — and has a lot more power than we will leverage in this series.

In short, Moloch will read our PCAP data, index the key features (such as IP addresses, usernames, traffic protocols, and more) into an Elasticsearch instance, and provide a user-friendly interface to explore our volume of PCAP data.

Prep work


  • Linux machine to host processing services. We could use Windows, but this guide leverages Linux to eliminate barriers to participation.
  • This Linux machine should have at least 2 processing cores and 4 GB of RAM to handle this load, though more is recommended.
  • For storage we will want at least 20 GB of available disk space, though more will be needed depending on the amount of PCAP data retention desired.

Elasticsearch Setup

A quick summary of the steps:

  1. Download the installer package and dependencies (primarily Java)
  2. Edit the elasticsearch.yml to reflect any settings you would like to employ (no changes are required)
  3. Start elasticsearch!

If you want, you can edit the configuration file to prepare the instance for use in a cluster and set up 2 or more additional nodes to balance the index data.
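The three steps above can be sketched as a short command sequence. This assumes a Debian/Ubuntu host and the Elasticsearch 7.6.0 package; the version, package names, and Java dependency are illustrative, so check the Elastic download page for current releases:

```shell
## Step 1: install Java and the Elasticsearch package
sudo apt-get install -y openjdk-11-jre-headless
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.0-amd64.deb
sudo dpkg -i elasticsearch-7.6.0-amd64.deb

## Step 2 (optional): adjust settings such as cluster.name or network.host
sudo vi /etc/elasticsearch/elasticsearch.yml

## Step 3: start the service and confirm the node responds
sudo systemctl enable --now elasticsearch
curl -s http://localhost:9200
```

The `curl` check should return a small JSON document describing the node; if it does, Elasticsearch is ready for Moloch.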

Moloch Setup

The current installation steps include:

  1. Download and install the respective moloch package
  2. Edit the Moloch configuration file, /data/moloch/etc/config.ini
  3. Configure Maxmind integration for GeoIP and ASN resolutions, https://molo.ch/faq#maxmind
  4. Run the configuration script, /data/moloch/bin/Configure
  5. Initialize Elasticsearch for Moloch: /data/moloch/db/db.pl http://ESHOST:9200 init, where ESHOST is the IP address or hostname of your Elasticsearch service.
  6. Add an administrative account, /data/moloch/bin/moloch_add_user.sh admin "Admin User" DONTUSETHISPASSWORD --admin
  7. Start the Moloch service!

At this point, you should be able to access Moloch on port 8005 in your browser.
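As a single reference, the installation steps above can be run roughly as follows. This is a sketch, not a definitive install: the Moloch package version, Ubuntu release, and systemd unit names are assumptions, so match them to your distribution and the build you download from molo.ch:

```shell
## Steps 1-2: install the package and edit the configuration
wget https://files.molo.ch/builds/ubuntu-18.04/moloch_2.2.3-1_amd64.deb
sudo dpkg -i moloch_2.2.3-1_amd64.deb
sudo vi /data/moloch/etc/config.ini

## Step 4: run the configuration script (prompts for interface, ES host, etc.)
sudo /data/moloch/bin/Configure

## Step 5: initialize Elasticsearch for Moloch (replace ESHOST)
/data/moloch/db/db.pl http://ESHOST:9200 init

## Step 6: add an administrative account
/data/moloch/bin/moloch_add_user.sh admin "Admin User" DONTUSETHISPASSWORD --admin

## Step 7: start the capture and viewer services
sudo systemctl start molochcapture molochviewer
```

Step 3, the MaxMind GeoIP integration, is configured separately per the Moloch FAQ linked above.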

Connecting the Moving Parts

This can be automated with this shell script. Just generate an SSH key to use for connecting to the honeypot and update the <USER> and <IP> values. You may also need to update the location of the PCAP data based on your honeypot tcpdump configuration:

#!/bin/bash
## Please insert the appropriate User and IP address values
## You may also need to edit the path to where your PCAPs
## exist on the remote system
echo ===== Pulling remote pcap data
rsync -a <USER>@<IP>:/data/rdp.*.pcap* /data/pcap/
echo rsync exit code: $?
echo ===== Processing new pcaps
## (the call to the PCAP processing script goes here)
echo indexer exit code: $?
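To let the `rsync` in the script above run non-interactively, generate a dedicated key pair and authorize it on the honeypot. The key filename here is an assumption; substitute your honeypot's `<USER>` and `<IP>` as in the script:

```shell
## One-time setup: create a passphrase-less key for the sync job
ssh-keygen -t ed25519 -f ~/.ssh/honeypot_sync -N ''

## Install the public key on the honeypot so rsync can authenticate
ssh-copy-id -i ~/.ssh/honeypot_sync.pub <USER>@<IP>
```

A passphrase-less key is a trade-off made for unattended scheduling; restrict it to the sync account and consider limiting it in the honeypot's authorized_keys.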

This short script also calls a second script, which handles parsing the PCAPs in the output folder with moloch-capture.

We will want to set up our rsync and moloch-capture script to run with some frequency to pull PCAP data down and process it. Since we are capturing new PCAPs on an hourly basis, we could set this sync to occur hourly.
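One way to schedule this is a crontab entry. The script path and log location below are assumptions; the entry runs the sync five minutes past each hour, giving tcpdump time to rotate to a new capture file:

```shell
## Add via `crontab -e` on the processing host
5 * * * * /data/scripts/sync_and_index.sh >> /var/log/pcap_sync.log 2>&1
```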

Moloch with RDP PCAP data processed over time.

Concluding Thoughts
