Running Snort In Kubernetes — Part 1: Architectural Overview

Raj Kuni · Posh Engineering
Mar 25, 2021 · 5 min read

This post is the first in a series covering how I containerized Snort and got it running in a Kubernetes cluster to monitor incoming and outgoing traffic. It is meant to be a diary of what I did to reach that goal, not an official guide to getting Snort running in a production Kubernetes cluster! But I hope that you, as a reader, will gain the same insight and knowledge I did while working through this project.

I am assuming that you have some familiarity with Kubernetes and its associated concepts. I will also avoid going into too much detail on how Snort works as there is already plenty of documentation for that.

In this post we will go over what Snort is and how it is used, and then give an architectural overview of where we want to end up (or where I got to) after everything is said and done.

Note that I deployed Snort 2.9.15.1.

My Environment Details

Cloud Provider: Google Cloud Platform (GCP)

Cluster Service: Google Kubernetes Engine (GKE)

Database Service: Google CloudSQL

What is Snort?

Snort is a network inspection tool that can work in three different modes (see the example invocations after this list):

  • Sniffer Mode: Snort reads packets off the network and displays them in a continuous stream on the console.
  • Packet Logger Mode: Snort reads packets and logs them to disk.
  • Network Intrusion Detection System (NIDS) Mode: Snort analyzes traffic against a set of rules and takes action on matches.
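
For orientation, here is roughly how each mode is invoked on the command line. This is a sketch for Snort 2.x: exact flags and paths depend on your build and configuration.

# Sniffer mode: print packet headers (and payloads, with -d) to the console
snort -vd

# Packet logger mode: log packets to a directory on disk
snort -dev -l /var/log/snort

# NIDS mode: analyze traffic on eth0 against the rules in snort.conf
snort -c /etc/snort/snort.conf -i eth0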

This document focuses on setting up Snort in NIDS mode. In NIDS mode, Snort will analyze each packet using a set of rules to decide whether an action should be taken on the packet. If a certain rule is triggered by a packet, then the action taken is decided by the rule type. Rule types include:

  • Alert: trigger an alert
  • Log: log the packet information
  • Drop: drop the packet (only in Inline Intrusion Prevention mode)

This is barely scratching the surface. Get more information than you probably need about Snort here.

What is a Snort Rule?

A Snort rule is a specification of what to do when a packet fulfills certain criteria. This is the basic outline of a rule:

[action] [protocol] [source IP] [source port] -> [dest IP] [dest port] ([rule options])

Let’s take a look at an example rule to solidify our understanding:

alert tcp any any -> any any (msg:"Facebook traffic detected"; content:"www.facebook.com";)

  • The action is alert: This tells Snort to trigger an alert if this rule is matched.
  • The protocol is tcp: This tells Snort to apply the rule against TCP traffic.
  • The source IP is any: This tells Snort to apply the rule against packets originating from any IP.
  • The source port is any: This tells Snort to apply the rule against packets originating from any port.
  • The destination IP is any: This tells Snort to apply the rule against packets destined for any IP.
  • The destination port is any: This tells Snort to apply the rule against packets destined for any port.

The rule options section can contain lots of different fields. In this example, we have:

  • msg: the message to attach to the alert when the rule matches a packet.
  • content: the rule triggers only if the specified content is found in the packet. In this case we want the rule to fire if the packet contains "www.facebook.com".

For more information check here.

Interlude

Okay, now we have a good idea of what Snort is. But Snort by itself doesn’t give us much utility. How can we be notified of the findings made by Snort? We could check logs periodically, but that’s not very effective. What if some suspicious traffic entered the network at 3 a.m.? We would never know. We need some way to be notified of a finding immediately. And what if we want to review findings for auditing purposes, or to figure out trends?

We need additional “helper” applications to achieve this. The following sections will go over all the additional apps/tools that we will create and use for the full setup.

Barnyard2

Imagine we want Snort to write its findings to a database. Snort is meant to analyze millions of packets; if it were also responsible for writing to a database, it would get bogged down and miss a few (or a few million) packets.

This is where Barnyard2 comes into play. Snort will be set up to write its output in a binary format called unified2, which is very fast compared to writing to a database. Barnyard2 is a spooler that takes on the job of interpreting that binary data and writing it to the database.
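
As a rough sketch of the wiring (file names and paths here are illustrative, not the final config), Snort is told to spool events in the unified2 binary format, and Barnyard2 tails that spool directory:

# snort.conf: emit events in the unified2 binary format
output unified2: filename snort.u2, limit 128

# Run Barnyard2 against the spool directory:
#   -d: directory Snort spools unified2 files into
#   -f: base filename of the unified2 files
#   -w: "waldo" bookmark file, so Barnyard2 resumes where it left off after a restart
barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo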

I also set up Barnyard2 to send alerts to Slack on behalf of Snort. All of this setup will be detailed in upcoming sections.

Note: I believe Snort v3 does not require such a spooler because it is multi-threaded and can handle writes via a different thread.

RSyslog

RSyslog is a message-forwarding system. When Barnyard2 reads an alert from the Snort output, it sends a corresponding log message to the RSyslog server, which in turn sends it on to a Slack channel.
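
A minimal sketch of that path, assuming Barnyard2's stock alert_syslog output and rsyslog's omprog module (slack-notify.sh is a hypothetical wrapper script around a Slack webhook):

# barnyard2.conf: also emit alerts via syslog
output alert_syslog: LOG_AUTH LOG_ALERT

# rsyslog config: hand matching messages to a script that posts to Slack
module(load="omprog")
if $programname == 'barnyard2' then {
  action(type="omprog" binary="/usr/local/bin/slack-notify.sh")
}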

MySQL

Barnyard2 will write the Snort data to a MySQL database. I will show you how I created a CloudSQL instance in GCP, and a proxy deployment running in the cluster that exposes this DB to the Barnyard2 instance.
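
For reference, Barnyard2's database output is a single line in barnyard2.conf. The host name below assumes a Kubernetes Service called cloudsql-proxy in front of the proxy deployment, and the credentials would really come from a Secret; all of these names are illustrative:

# barnyard2.conf: write decoded events to MySQL via the in-cluster proxy
output database: log, mysql, user=snort password=<from-a-secret> dbname=snort host=cloudsql-proxy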

BASE

Basic Analysis and Security Engine (BASE) is an open source tool that provides a web GUI to analyze and review the findings from Snort. It is a web application that network admins can use to browse past findings and build queries to filter through findings. There’s not much documentation on BASE as it is a legacy project. I am sure there are more recent and well-maintained front-ends. I chose BASE because it was simple to deploy and open source.

Deployment Architecture

Okay, now we know all the applications and services we need to create and deploy. Let’s take a look at how all these pieces should be placed. Here’s an illustration of the cluster and surrounding components after we have everything up and running:

  • The large light blue squares are the nodes in the cluster.
  • Each node in the cluster has one network interface, called eth0. This is represented by the small dark blue squares.
  • We want a Snort instance listening on every interface on every node in the cluster, so we will deploy Snort as a DaemonSet. The Snort pods are represented by the red ovals.
  • The other components (RSyslog, BASE, and the DB proxy) are Deployments; we don't care which nodes those pods land on.
  • The smaller teal ovals represent other pods that may be running on the cluster.
  • The cloud shape represents the CloudSQL instance, which will store all the Snort findings.

You might be wondering where Barnyard2 is. Barnyard2 will run as a sidecar container in the Snort pod. This is required because Barnyard2 needs to read the unified2 files that Snort writes, so we can use a shared volume to give the Snort and Barnyard2 containers access to the same files (see the sketch below)!
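
Here is a minimal sketch of what that DaemonSet could look like. Image names, labels, and paths are placeholders; the real manifests come later in the series.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: snort
spec:
  selector:
    matchLabels:
      app: snort
  template:
    metadata:
      labels:
        app: snort
    spec:
      hostNetwork: true  # so Snort sees the node's eth0, not just the pod network
      containers:
      - name: snort
        image: example/snort:2.9.15.1  # placeholder image
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]  # raw packet capture needs these
        volumeMounts:
        - name: snort-logs
          mountPath: /var/log/snort  # Snort spools unified2 files here
      - name: barnyard2
        image: example/barnyard2  # placeholder image
        volumeMounts:
        - name: snort-logs
          mountPath: /var/log/snort  # Barnyard2 reads the same spool
      volumes:
      - name: snort-logs
        emptyDir: {}  # shared scratch volume between the two containers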

Conclusion

That’s it for part one! In the next post, we will go over how to containerize each of the applications mentioned above. See you then!

In the meantime, if you’re interested in solving interesting problems alongside a passionate group of engineers, apply to join our team and help us build the future of conversational AI!
