Container monitoring using Splunk

Docker has a 'logs' command, docker logs <container-id>, that fetches the logs from a container. We can run this via the Docker daemon on our host, and it captures all the stdout/stderr from the running container. But this approach doesn't work very well if we want to stream all our Docker logs to a centralized logging server. Hence we need some mechanism to facilitate log centralization, that is, organizing logs from servers, applications, routers, containers and more into one central location. Splunk is an excellent solution to this problem.

In this post, I will describe the process of using Splunk to retrieve Docker container logs. Beyond log centralization, Splunk provides the following benefits too.

  • Insanely fast search: whether we’re searching keywords, key value pairs or regex patterns, get results faster than ever.
  • Easy to read results: can create custom tags for spotting important events; can view logs in raw format or in table view for easy interpretation.
  • Any data format: whether the data is structured JSON or mysterious plain text, it’s easy to send log entries for immediate search.

Splunk as a Logging Driver

Docker includes multiple logging mechanisms to help us get information from running containers and services. These mechanisms are called logging drivers. Each Docker daemon has a default logging driver, which each container uses unless we configure it to use a different logging driver.

When we start a container, we can configure it to use a different logging driver than the Docker daemon’s default. To find the current default logging driver for the Docker daemon, run docker info and search for Logging Driver. You can use the following command on Linux or macOS:

$ docker info | grep 'Logging Driver'
Logging Driver: json-file

Splunk is one such logging driver, which writes log messages to Splunk using HTTP Event Collector (HEC).

The splunk logging driver sends container logs to HTTP Event Collector in Splunk Enterprise and Splunk Cloud.
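The per-container flag is shown later in this post, but the driver can also be made the daemon-wide default via /etc/docker/daemon.json. A sketch, with the URL and token as placeholders for your own deployment:

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-url": "https://localhost:8088",
    "splunk-token": "<your-hec-token>",
    "splunk-insecureskipverify": "true"
  }
}
```

After editing this file, restart the Docker daemon for the new default to take effect.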

HEC is a fast and efficient way to send data to Splunk Enterprise and Splunk Cloud. It enables us to send data over HTTP (or HTTPS) directly from a particular application. HEC is also token-based, so we never need to hard-code Splunk Enterprise or Splunk Cloud credentials in our app or supporting files.

HEC was introduced in Splunk 6.3 and offers a simple, high-volume way to send events from applications directly to Splunk Enterprise and Splunk Cloud for analysis. What we really mean by direct is without needing a local forwarder, that is, sending from clients living outside the corporate network. HEC is thus a token-based JSON API for sending events to Splunk from anywhere without requiring a forwarder. It is designed for performance and scale: with a load balancer in front, it can be deployed to handle millions of events per second. It is highly available, secure, and easy to configure and use. HEC makes it possible to collect logs from many sources, including Docker. If you are a developer looking to get visibility into your applications within Splunk, looking to capture events from external systems and devices, or you offer a product that you’d like to integrate with Splunk, HTTP Event Collector is the way to go.
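To see HEC in action independently of Docker, an event can be posted with plain curl. This is a sketch: the URL and token below are placeholders for your own instance, and -k skips verification of Splunk's default self-signed certificate.

```shell
# Placeholders: point these at your own Splunk instance and HEC token.
HEC_URL="https://localhost:8088/services/collector/event"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"

hec_send() {
  # -k skips TLS verification (Splunk ships with a self-signed certificate)
  curl -sk "$HEC_URL" \
    -H "Authorization: Splunk $HEC_TOKEN" \
    -d "{\"event\": \"$1\", \"sourcetype\": \"manual\"}"
}

# hec_send "hello from docker"   # a live instance answers with a JSON status
```

The same endpoint is what the Splunk logging driver talks to under the hood.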

Splunk for container monitoring

Use of containers is on the rise, and with good reason. They enable us to develop, deploy and scale an application anywhere, at any time. That, in turn, helps us deliver better code and a better experience to end users.

Containers offer a portability that wasn’t possible in traditional IT applications. Based on underlying Linux kernel technology, this portability abstracts the complexity of the compute layer, OS and application stack, ensuring that an application always runs the same, no matter what environment it’s in. This enables developers to focus on what’s most important, the application itself. Containers also increase speed and flexibility, as they can be spun up in seconds. This helps us build, configure, test, deploy, update and migrate our apps faster and more easily.

Containers are great, but they come with challenges. They can make it harder to monitor performance and logs, and if we can’t find the source of errors and performance issues, it’s difficult to maximize agility and speed while maintaining high service reliability. In addition, containers can have a short lifespan, sometimes only seconds, which makes traditional log capture difficult and often irrelevant for container monitoring and troubleshooting. To run and develop containerized applications effectively, our IT solutions must be able to index, search and correlate container-based data with other data sources for better service context, root-cause analysis, monitoring and reporting. Furthermore, container monitoring must be easy to implement and integrated with both the container deployments and the IT operations monitoring solution.

Using Splunk software, we can leverage a single solution to:

  • Monitor and analyze container data and enable IT operations analytics.
  • Monitor container performance to ensure containers are available, and that issues are fixed quickly with minimal effort.
  • Help you gain insight on container resource usage, cluster capacity and the service impact of increasing cluster use for a specific service.
  • Gain better service context and accelerate root-cause analysis by indexing, searching and correlating container-based data with data from the entire technology stack.

Getting started

Splunk allows 500 MB/day of ingestion for free, and that’s plenty to get started with. What I’ll explain here is how to integrate Docker logging with Splunk.

This can be achieved using the official Splunk Enterprise Docker image. The official repository for Splunk Enterprise contains Dockerfiles that we can use to build Splunk Docker images.

What is Splunk Enterprise?

Splunk Enterprise is the platform for operational intelligence. The software lets us collect, analyze and act upon the untapped value of big data that the technology infrastructure, security systems, and business applications generate. It gives us insights to drive operational performance and business results.

Why run Splunk in Docker?

  • Reduced Management Costs
  • Faster Time to Value
  • High Availability
  • Reduced Time to Upgrade
  • Simplified Rollback
  • Standard Configurations
  • Easier to Support

Configuring Splunk

I tried this with two methods. They are as follows.

  1. Using the Splunk Enterprise Docker Image
  2. Configuring the Splunk Enterprise Docker container with docker-compose

Let me describe each method in detail.

Method 1: Using the Splunk Enterprise Docker Image

Follow these steps to get a working instance of Splunk.

  1. Download and install Docker on your system.
  2. Open a shell prompt or terminal window.
  3. Enter the following command to pull the Splunk Enterprise image.
docker pull splunk/splunk

4. Run the Docker image.

docker run -d -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_USER=root" -p "8000:8000" -p "8088:8088" splunk/splunk

At this step, make sure that the docker container exposes the following network ports properly.

  • 8000/tcp - Splunk Web interface
  • 8088/tcp - HTTP Event Collector
  • 8089/tcp - Splunk services (splunkd management port)

5. Access the Splunk instance with a browser by using the Docker machine IP address and the Splunk Web port. For example, http://localhost:8000.
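Before opening a browser, a quick curl check can confirm that the container is actually serving Splunk Web. This is a small sketch; localhost and port 8000 simply mirror the -p mapping in the docker run command above.

```shell
# Return the HTTP status code of Splunk Web; 200 means it is up.
# Host and port default to the mapping used in the docker run above.
splunk_web_status() {
  curl -s -o /dev/null -w '%{http_code}' "http://${1:-localhost}:${2:-8000}"
}

# splunk_web_status   # Splunk can take a minute or two to finish starting
```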

The Splunk logging driver is available in Docker 1.10 and higher. Note that if you are running on macOS or Windows you’ll need a dedicated Linux VM. Using the driver, you can configure your host to send all logs written to stdout directly to Splunk Enterprise or to a clustered Splunk Cloud environment.

Now login to Splunk.

Splunk login page

After successful login, you will be navigated to the home page.

Splunk home page

Method 2: Configuring the Splunk Enterprise Docker container with docker-compose

Follow these steps to get a working instance of Splunk.

  1. Download and install docker-compose on your system.
  2. At a shell prompt, create a text file docker-compose.yml .
  3. Open docker-compose.yml for editing.
  4. Insert the following block of text into the file.
version: '2'

services:
  vsplunk:
    image: busybox
    volumes:
      - /opt/splunk/etc
      - /opt/splunk/var

  splunk:
    #build: .
    hostname: splunkenterprise
    image: splunk/splunk:6.5.2
    environment:
      SPLUNK_START_ARGS: --accept-license
      SPLUNK_ADD: tcp 1514
    volumes_from:
      - vsplunk
    ports:
      - "8000:8000"
      - "9997:9997"
      - "8088:8088"
      - "1514:1514"

5. Save the file and close it.

6. Run the docker-compose utility in the same directory.

docker-compose up

7. Then access the Splunk instance with a browser by using the Docker machine IP address and the Splunk Web port. For example, http://localhost:8000.

Logs by Container: the Splunk Logging Driver

Configure HEC

First, you need to enable the Splunk HTTP Event Collector. In the Splunk UI, go to Settings -> Data Inputs -> HTTP Event Collector -> Global Settings.

Click Enabled alongside ‘All Tokens’, and enable SSL. This will enable the HTTP Event Collector on port 8088 (the default), using the Splunk default certificate.
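Once HEC is enabled, its health endpoint is an easy way to confirm the collector is listening. A sketch; port 8088 is the default from the Global Settings above, and -k allows the default self-signed certificate.

```shell
# Query the HEC health endpoint; a healthy collector replies with a
# small JSON status message.
hec_health() {
  curl -sk "https://${1:-localhost}:8088/services/collector/health"
}

# hec_health
```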

App: Docker Overview

From the Splunk home page, you can access the Docker Overview dashboard by clicking the blue Docker box in the left sidebar.

Docker Overview

You won’t find any data here yet; that’s because no container has been configured to use the Splunk logging driver. Let’s do that now.

Let’s run the hello-world image, configured to use the Splunk logging driver.

docker run --log-driver=splunk \
--log-opt splunk-url=<splunk-url> \
--log-opt splunk-token=<splunk-token> \
--log-opt splunk-insecureskipverify=<splunk-insecure-skip-verify> \
hello-world

Here is more detail on the settings above:

  • log-driver=splunk specifies that I want to use the Splunk logging driver.
  • splunk-token refers to the Splunk HTTP Event Collector token.
  • splunk-url is set to the host (including port) where the HTTP Event Collector is listening, i.e. the path to the Splunk Enterprise instance.
  • splunk-insecureskipverify instructs the driver to skip certificate validation, as my Splunk Enterprise instance is using the default self-signed certificate.
  • Lastly, I’ve told Docker to run the hello-world image.

Let’s see the output.

Here, you can find one instance of the hello-world image running. You can click on the container and view the logs, and you can also see the events pouring in in real time.

Container logs

These are just the basics. I can also configure the Splunk Logging Driver to include more detailed information about the container itself, something which is very useful for analyzing the logs later.

docker run --label type=test \
--label location=home \
--log-driver=splunk \
--log-opt splunk-url=<splunk-url> \
--log-opt splunk-token=<splunk-token> \
--log-opt splunk-insecureskipverify=<splunk-insecure-skip-verify> \
--log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" \
--log-opt labels=type,location \
hello-world

These additional options do the following.

  • label defines one or more labels for the container.
  • labels defines which labels to send to the log driver; these will be included in the event payload.
  • tag changes how my container will be tagged when events are passed to the Splunk logging driver.

Let’s look at the event logs for a while.

As you can see above, each event now has a dictionary of attributes containing the labels from the driver configuration (this can also include a list of environment variables). The tag has also changed to the format I specified.
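For illustration, an event from the run above arrives in Splunk roughly in this shape; the container name and ID here are made up, and the exact field set depends on the driver version:

```json
{
  "line": "Hello from Docker!",
  "source": "stdout",
  "tag": "hello-world/boring_galileo/8765f83c2ab1",
  "attrs": {
    "type": "test",
    "location": "home"
  }
}
```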

Creating custom indices

Let’s proceed one step further.

I can now add additional configuration to control how Splunk indexes the events, including changing the default index, source and sourcetype. For example, unless we specify an index, events go to the ‘main’ index by default.

Let’s create a custom index and a token and retrieve container logs through them.

Create a new index

  • Go to Settings -> Indexes. Here you can find the existing indexes.
  • Click New Index.
  • Complete the entries and save the index (the default values are fine).

Note: Here, I have described the process of creating indices through the Splunk UI. You can also do that by adding them to the indexes.conf file, which resides at /opt/splunk/etc/apps/app-docker/local/indexes.conf.
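For reference, a minimal stanza for the ‘test’ index in indexes.conf would look something like this (the paths follow Splunk’s usual $SPLUNK_DB layout; adjust to taste):

```ini
[test]
homePath   = $SPLUNK_DB/test/db
coldPath   = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
```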

Create a new token

  • Go to Settings -> Data Inputs -> HTTP Event Collector. Here you can find the existing tokens.
  • Click New Token; this opens the Add Data workflow.
  • Select Source: give the token a name (test-token).
  • Input Settings: select the index (test) that you created earlier, so results will be sent to the new index. You can also set the source type, e.g. json_no_timestamp.
  • Review.
  • Submit, and you are ready to go. You can now find the new token.

Use the custom index

  • Specify the new token in the docker run command.
docker run --log-driver=splunk \
--log-opt splunk-url=<splunk-url> \
--log-opt splunk-token=23954D7C-05B3-44A8-A043-3C419E9C1ED8 \
--log-opt splunk-insecureskipverify=<splunk-insecure-skip-verify> \
hello-world
  • Search. Go to the Search dashboard and type index="test". You will then see the respective container logs.
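Beyond a bare index search, standard SPL lets us slice the container logs further. A sketch; the source field is populated by the logging driver:

```
index="test" | stats count by source
```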

In this post, I have described my findings on learning Splunk for container monitoring.

Hope this post helped you learn the basics of Splunk.
