Containerizing my NSM stack — Docker, Suricata and ELK

Everything goes in tiny little containers and works like building blocks!

What’s the motivation?

Previously, I had done a little bit of work on getting an NSM stack up and running for a home network. A common bit of feedback on that documentation was that more automation would make it easier to get a working version of the project going.

Also, whenever I wanted to change the way my NSM stack worked, I had to tear everything down and start over. Since my IDS isn’t virtualized (it’s a server built from old hardware), rebuilding required more work than I would have liked. I initially considered using something like Ansible or Chef, but I wanted the work to be more portable and easier to modify. I then realized this would be the perfect project for a deeper dive into Docker.

Project Summary: TL;DR

Overall, containerization of my NSM platform was successful. Everything is running in a Docker container. There’s a container for Suricata that does all of the network traffic monitoring and logging, shipping its logs to Logstash via Filebeat. There are containers for Elasticsearch and Kibana, but those are pretty vanilla and worked right out of the box. Lastly, there’s a container for Logstash with some configuration for parsing Suricata’s eve.json logs before shipping them off to Elasticsearch.

In the future, I’ll be adding more features to the NSM stack, including:

  • Putting it all together into a single application using Docker Compose.
  • More configuration and tooling around Suricata rule management.
  • More configuration tuning for Suricata and the ELK stack.
  • Adding Bro as another source of data and monitoring.
  • Automatically configuring Kibana with Dashboards and Visualizations.
  • Hopefully much more, since this is quite a fun project!

Pre-Docker Containers: OS Configuration

This is the configuration that has to be done before setting up the Docker containers. These steps are for an Ubuntu 16.04 LTS OS. The hardware configuration is pretty well covered on my Git Wiki here.

From there, we’ve got to set up the network interface for traffic sniffing. Identify the sniffing interface and run these commands:

ip link set <INTERFACE> multicast off
ip link set <INTERFACE> promisc on
ip link set <INTERFACE> up

These commands disable multicast on the interface, turn on promiscuous mode for traffic sniffing, and bring the interface up. You can test whether the configuration was successful by using tcpdump and validating that something other than ARP traffic is being seen. For example, browse to an HTTP-only site while running tcpdump:

tcpdump -i <INTERFACE> port 80
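
You can also confirm the interface flags directly. With the commands above applied, PROMISC should appear in the flag list and MULTICAST should be gone:

ip link show <INTERFACE>
# expect something like <BROADCAST,PROMISC,UP,LOWER_UP> in the first line of output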

After that, we’ll install Docker by following the instructions on their site here. Finally, we’ll get all of the required Docker images that we’ll be using either as a base image or out of the box.

docker pull ubuntu
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.1.1
docker pull docker.elastic.co/kibana/kibana:6.1.1
docker pull docker.elastic.co/logstash/logstash:6.1.1
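
If the pulls succeed, everything should show up locally:

docker images
# expect the ubuntu base image plus the three Elastic images tagged 6.1.1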

As a quick note, I’m just barely getting into my Docker and containerization adventures, so I could be doing things against best practices. My idea around this project is to get familiar with Docker and how I could containerize more security services. I’m always happy for feedback and improvements and I’ll share what I learn.

Docker Containers: Elasticsearch and Kibana

This section is pretty straightforward: using Elasticsearch and Kibana out of the box is simple. We’ve already downloaded the Docker images for both, so now we just have to launch them.

For Elasticsearch, we just launch the container, exposing the service ports and setting the discovery.type environment variable to single-node. We’ve set the hostname and container name to elastic, and the network to “host” so that all of our containers can communicate with the Suricata container (which also uses the host network). We’re also mounting a Docker volume so that Elasticsearch data persists between container shutdowns. In the future, I’d like to make some changes so that only the Suricata container has to be on the host network, but more on that later.

For reference, more information can be found here.

docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --hostname=elastic --name=elastic --network=host -t --mount source=elastic,destination=/usr/share/elasticsearch/data docker.elastic.co/elasticsearch/elasticsearch:6.1.1
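
Once the container has had a minute to start (and assuming nothing in your setup is enforcing X-Pack authentication), a quick health check confirms Elasticsearch is answering:

curl -s 'http://localhost:9200/_cluster/health?pretty'
# a "status" of green or yellow means the single node is up and accepting requests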

For Kibana, we just launch the default container with a few flags as well. We’re setting the Elasticsearch URL to localhost (since everything is on the host network), and we’re exposing the default Kibana port.

For reference, more information can be found here.

docker run -e ELASTICSEARCH_URL="http://localhost:9200" --hostname=kibana --name=kibana --network=host -p 5601:5601 -t docker.elastic.co/kibana/kibana:6.1.1
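
Kibana takes a little while to finish its first boot; once it’s done, it should answer on its default port:

curl -sI http://localhost:5601
# any HTTP response here means Kibana is listening; from there, browse to http://localhost:5601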

Docker Containers: Logstash

Our Logstash container has a bit of customization to it. We have a Dockerfile that helps us build the container with the configuration changes; it can be found on Github here. The Dockerfile creates a Docker image while also removing the default Logstash configuration and replacing it with the one found here. This lets us parse and format all of the Suricata eve.json logs sent via Filebeat before shipping them to Elasticsearch.
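
As a rough sketch of the shape that configuration takes (the file in the repo is the source of truth; the port number and field handling here are assumptions), a Beats-to-Elasticsearch pipeline looks something like this:

input {
  beats {
    port => 5044              # Filebeat ships eve.json lines to this port
  }
}
filter {
  json {
    source => "message"       # each eve.json line is a single JSON event
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}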

To build the Logstash image using the Dockerfile, change to the logstash directory and run this command:

docker build -t logstash .

To launch the Logstash container, run this command. The flags are pretty much the same as before, except for a Logstash environment variable that points its monitoring at Elasticsearch.

docker run --hostname=logstash --name=logstash --network="host" -e "xpack.monitoring.elasticsearch.url=http://localhost:9200" -t logstash
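
To confirm Logstash came up cleanly, watch its startup output; the pipeline should start and the Beats input should bind its listening port:

docker logs -f logstash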

Docker Containers: Suricata

The last bit of the puzzle is the Suricata container and all the logging magic that it does. This container is quite a bit more customized and is built from the Ubuntu Docker image. The Dockerfile can be found here. The inline comments cover most of what’s going on, but in summary we’re installing both Suricata and Filebeat and then moving the configurations to their proper locations. At the very end, we tell the container to run the service start commands for both Suricata and Filebeat, then tail the Suricata log. The tail command keeps the container running.
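
As a loose sketch of the general shape of that Dockerfile (the actual file linked above is authoritative; the package sources and config paths below are assumptions):

FROM ubuntu
# Install Suricata (e.g. from the OISF PPA) and Filebeat (from Elastic's apt repo);
# the exact Filebeat repository setup is omitted here.
RUN apt-get update && \
    apt-get install -y software-properties-common && \
    add-apt-repository -y ppa:oisf/suricata-stable && \
    apt-get update && \
    apt-get install -y suricata
# Drop the customized configs into place.
COPY suricata.yaml /etc/suricata/suricata.yaml
COPY filebeat.yml /etc/filebeat/filebeat.yml
# Start both services, then tail the Suricata log to keep the container in the foreground.
CMD service suricata start && service filebeat start && tail -f /var/log/suricata/suricata.log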

To build the Suricata Docker image, change to the Suricata Docker directory and run this command:

docker build -t suricata .

Once the container is built, launch it with this command:

docker run --network=host --hostname=suricata --name=suricata -it suricata
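
Once it’s running, the container’s stdout is just the tailed Suricata log, so startup messages (and any errors) are easy to spot-check. The eve.json path below is Suricata’s default, and an assumption about this build:

docker logs suricata
docker exec suricata tail -n 5 /var/log/suricata/eve.json   # should show JSON events once traffic flows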

The reason we’re running all of the Docker containers on the host network is that the Suricata container needs access to the host’s sniffer interface. This is where my Docker learning has come up a bit short: I found some information about running a container with multiple networks, but I couldn’t get it working in testing. So instead of adding my Docker containers to more networks after they’ve been created, I took a bit of a shortcut. I’d like to fix this in the future, though.
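
For reference, Docker can attach a running container to additional user-defined networks, but with a relevant wrinkle: a container started with --network=host can’t be connected to any other network, which is likely why this approach fights back here. A sketch of the building blocks (the nsm network name is made up for illustration):

docker network create nsm              # a user-defined bridge network for the ELK containers
docker network connect nsm elastic     # attach an already-running container to it
# note: this fails for containers started with --network=host, such as the Suricata container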

All together now!

With that completed, all of the containers are launched and we’re ready to see how everything is going.

All the Docker containers are running!
Elasticsearch is receiving data, and Kibana can visualize what’s being stored!
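
A quick end-to-end check from the command line (the logstash-* index name is the Logstash output default, so an assumption about this config):

curl -s 'http://localhost:9200/_cat/indices?v'
# a logstash-* index with a growing docs.count means events are flowing all the way through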

Future improvements to this project:

  • Kibana configuration, visualizations, and searching. Useful visualizations are nice to have out of the box, and examples make it easier to build your own.
  • Adding Bro to the IDS stack. Bro gives a ton of valuable analysis and helpful logs, so it’d be perfect for this NSM.
  • Suricata configuration deep dive and performance tuning. Suricata is quite a beast and can need a fair amount of tuning depending on network throughput. I honestly haven’t run Suricata in a container setup long enough to know what might need to change to get more performance.
  • Improved Suricata rule management. Currently we’re not updating any rules without tearing down the container and rebuilding it. That isn’t ideal, especially when you want fast, rolling deploys of new IDS rulesets. It should be fairly easy to port the script from previous work done with the grIDS NSM.