Honeypots are a great way to start building your own threat intelligence feeds. They provide a wealth of attacker data and can also act as an early-warning system in case of a breach.
There are plenty of honeypot projects already out there on GitHub and elsewhere, but I wanted to create my own, primarily for the learning experience but also because most of the existing ones didn't really fit what I wanted to achieve.
I wanted to mimic various services, capture the credentials used to brute-force access, and also let attackers in so I could record the commands they ran after a successful login.
I’ve created a script to deploy my honeypot and it’s available on my GitHub: https://github.com/DeathsPirate/aws-ec2-docker-ssh-honeypot
The goal of this article is to explain the workings of the honeypot but I’ll also be releasing ‘Part 2’ which will show how to create a nice dashboard in AWS CloudWatch for monitoring and alerting.
I spent some time looking at various options for a nice SSH honeypot that I could run and capture the following:
- Usernames and passwords used in brute-force attempts
- Valid logins to numerous accounts, with the commands captured as they are run
To be on the safe side I decided that the best way would be to use Docker and xinetd. This would allow each attacking IP to have their own container spun up. There’s a good article on this here: https://www.itinsight.hu/blog/posts/2015-05-04-creating-honeypots-using-docker.html
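The xinetd side of this is just a service definition that hands each incoming connection to a wrapper script which spins up a fresh container for that connection. A minimal sketch (the service name and script path here are my own placeholders, not taken from the deploy script):

```
service honeypot-ssh
{
    type        = UNLISTED
    port        = 22
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = root
    server      = /usr/local/bin/start_honeypot_container.sh
}
```

With `wait = no`, xinetd forks the server program once per connection, which is what gives each attacking IP its own container.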
Now, I wanted to allow attackers to log in as root, but allowing that with a standard Docker installation is dangerous, really dangerous: it can compromise the host. DON'T DO IT! To get around this we enable Docker user namespaces, which map root inside a container to an unprivileged user on the host (i.e. not root on the host!). With that sorted, and knowing we can block connections from the container using iptables, we have a fairly safe honeypot to deploy.
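Enabling user namespace remapping is a one-line change to the Docker daemon configuration in `/etc/docker/daemon.json` (restart the Docker daemon afterwards); with the `default` setting, Docker creates a `dockremap` user on the host and maps container root onto it:

```json
{
    "userns-remap": "default"
}
```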
So, we’ve got our honeypot but how do we capture the data? There are going to be two things we need to get at:
- The failed SSH attempts (including the actual username and password used)
- The commands run on a container after a successful login
To solve the first, we need to get at the passwords tried against the SSH service; for that we use PAM (Pluggable Authentication Modules). There is a great blog post here on what we are trying to accomplish, along with a handy Python script: http://www.chokepoint.net/2014/01/more-fun-with-pam-python-failed.html
I modified the script slightly to remove the reliance on syslog and to emit JSON, so the logged text is properly escaped.
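To give a feel for it, here is a minimal sketch of what such a pam_python module looks like. The log path and JSON field names are my own choices for illustration, not necessarily what the script in the repo uses:

```python
import json
import time

LOG_PATH = "/var/log/failed_attempts.log"  # hypothetical log location


def log_attempt(username, password, rhost, path=LOG_PATH):
    """Append one failed attempt as a single escaped JSON line."""
    event = {
        "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "username": username,
        "password": password,
        "src_ip": rhost,
    }
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")


def pam_sm_authenticate(pamh, flags, argv):
    # Called by pam_python for every authentication attempt: ask PAM for
    # the username and password, log them, then report failure so sshd
    # keeps prompting the attacker.
    try:
        user = pamh.get_user(None)
        resp = pamh.conversation(
            pamh.Message(pamh.PAM_PROMPT_ECHO_OFF, "Password: "))
        log_attempt(user, resp.resp, pamh.rhost)
    except pamh.exception:
        pass
    return pamh.PAM_AUTH_ERR


def pam_sm_setcred(pamh, flags, argv):
    return pamh.PAM_SUCCESS
```

The module is wired into the container's SSH PAM stack, so every password guess passes through `pam_sm_authenticate` before being rejected.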
To solve the second part I decided to use Sysdig, which can monitor pretty much anything happening in a container. It really is a cool tool and, if you haven’t already, you should definitely check it out: https://sysdig.com
Now, Sysdig has a concept of Chisels, which are basically modules that filter and display captured information in a friendly manner. I looked at a few of the ones provided, namely:
- spy-users (shows which commands were passed to execve)
- stdin (gives a char-by-char output of what's passed to stdin)
- spy-logs (monitors log files for changes)
I had to modify these Chisels slightly to emit JSON (I wanted input escaped and a friendly output format for my logging). For the stdin chisel I also wanted the whole line rather than individual characters, taking backspace and delete into account so the reconstructed line matches what was actually entered.
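To illustrate the line-reconstruction idea, here is a small stand-alone Python sketch of the algorithm. The real logic lives in the modified Lua chisel; this is just the concept:

```python
def reconstruct_line(chars):
    """Rebuild the line the attacker actually entered from a raw stream
    of stdin characters, honouring backspace/delete like a terminal."""
    buf = []
    for ch in chars:
        if ch in ("\x7f", "\x08"):   # delete / backspace: drop last char
            if buf:
                buf.pop()
        elif ch in ("\n", "\r"):     # end of line: stop accumulating
            break
        else:
            buf.append(ch)
    return "".join(buf)
```

So a stream like `l`, `s`, `a`, backspace, newline is logged as the command `ls`, not the raw keystrokes.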
So with the above in place we can spin up containers, capture the information we are interested in, and log it. Great, but we're not quite there yet. The SSH server in a container will disconnect an attacker after three failed password attempts, so after that I want to stop and remove the container (no point keeping it). If someone logs in successfully, I instead want to stop the container after they disconnect (or after 5 minutes, whichever comes first), export it to an S3 bucket for future analysis, and then remove it. This is all handled by the Python monitor script.
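Those lifecycle rules boil down to a small decision function. Here is a sketch of just that logic; the actual monitor script also has to shell out to `docker stop` / `docker export` and do the S3 upload, which I've left out:

```python
def next_action(logged_in, disconnected, age_seconds, max_age=300):
    """Decide what to do with a honeypot container.

    logged_in    -- did the attacker ever authenticate successfully?
    disconnected -- has the SSH session ended (3 failures also end it)?
    Returns 'keep', 'remove', or 'export' (export implies stop + remove).
    """
    if logged_in and (disconnected or age_seconds >= max_age):
        return "export"   # stop, export to S3 for analysis, then remove
    if not logged_in and disconnected:
        return "remove"   # three failed passwords, nothing worth keeping
    return "keep"
```

The monitor script just loops over running containers, evaluates this for each, and acts accordingly.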
I use systemd to run Sysdig and the Python monitoring script as services, so they start at boot and keep running after I log out.
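For reference, a unit file for the monitor script can be as simple as the following (the paths are illustrative, not the ones from the repo); a matching unit runs Sysdig with the modified chisels:

```
[Unit]
Description=Honeypot container monitor
After=docker.service network.target

[Service]
ExecStart=/usr/bin/python /opt/honeypot/monitor.py
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now` and systemd takes care of boot-time start and restarts.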
The Honeypot host creates three log files we are interested in:
- /var/log/docker_start.log (adds a log event when a Docker container is started)
- /var/log/failed_attempts.log (adds a log event for every failed SSH attempt)
- /var/log/commands.log (adds a log event for each command run in a container)
To get the logs off the EC2 instance and up into CloudWatch Logs we can use the AWS CloudWatch Logs Agent.
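The agent just needs a stanza per file in its configuration (`/etc/awslogs/awslogs.conf` on Amazon Linux); the log group names below are my own, so pick whatever fits your naming scheme:

```
[/var/log/failed_attempts.log]
file = /var/log/failed_attempts.log
log_group_name = honeypot/failed_attempts
log_stream_name = {instance_id}
initial_position = start_of_file
```

Repeat the stanza for the other two log files and each one shows up as its own log group in CloudWatch.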
The last bit is to open it up to attackers and start collecting data.
I’ve created a handy walk-through for accomplishing all of the above and it’s available on my GitHub: https://github.com/DeathsPirate/aws-ec2-docker-ssh-honeypot/blob/master/README.md