Docker Logging with Filebeat, Elasticsearch and Kibana

Shishir Lakkadi
3 min read · Mar 21, 2022


As part of my journey on logging with the ELK stack, the next goal is to integrate it with Docker so that we can store not only the application logs but the logs of all the containers. To achieve this, I will be using Filebeat. It ships with modules for observability and security data sources that simplify the collection, parsing, and visualization of common log formats down to a single command. The logs are shipped directly to Elastic Cloud and can be viewed on the Kibana dashboard.

This task can be divided into three steps:

  • Deploy Elastic Cloud
  • Configure Filebeat Docker Image
  • Putting it together

Step 1: Deploy Elastic Cloud

For this step, you can view my previous article (Step 3) to set up a deployment on Elastic Cloud.

Step 2: Configure Filebeat Docker Image

To gather Docker logs, Filebeat needs to run as a container. Let’s start by creating a new folder in a directory of your choice. Inside that folder, create the docker-compose.yml. Replace the environment variables with your Elastic Cloud host and Elastic Cloud password respectively.

version: "2.4"
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.0.1
    build: filebeat
    container_name: filebeat
    hostname: mydockerhost
    restart: unless-stopped
    environment:
      - ELASTICSEARCH_HOSTS=elastic_host
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=elastic_password
    labels:
      co.elastic.logs/enabled: "false"
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /var/lib/docker
        target: /var/lib/docker
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "2"
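
If you would rather not hardcode credentials in the compose file, Compose can substitute values from a .env file in the same directory. The variable names below are illustrative, not part of the original setup; fill in your own Elastic Cloud values.

# .env (illustrative variable names)
ELASTIC_HOST=<your Elastic Cloud Elasticsearch endpoint>
ELASTIC_PASSWORD=<your elastic user password>

# and in docker-compose.yml, reference them like this:
    environment:
      - ELASTICSEARCH_HOSTS=${ELASTIC_HOST}
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}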

In the file, I have set co.elastic.logs/enabled: "false" on the filebeat container to prevent the logs generated by this container from being ingested.
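
For context, Filebeat acts on co.elastic.logs/* labels when it runs with hints-based autodiscover. A minimal sketch of that variant (not used in this article’s config, shown only to illustrate how the label is honored) would look like this:

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true   # respect co.elastic.logs/* labels on containers
      hints.default_config:
        type: container
        paths:
          - '/var/lib/docker/containers/${data.docker.container.id}/*.log'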

The logging driver has been set to json-file, so Docker stores each container’s output as JSON files on disk, and the max-size and max-file options keep those files from growing unbounded. If the co.elastic.logs/enabled label were later set to true, the Filebeat container’s own logs would be ingested as well.
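
For reference, with the json-file driver each line a container writes ends up in a file under /var/lib/docker/containers/<container-id>/, roughly in this shape (the content is illustrative):

{"log":"listening on port 8080\n","stream":"stdout","time":"2022-03-21T10:15:30.123456789Z"}

These are exactly the files the container input will pick up in the next step.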

The next step is to create a filebeat.docker.yml file. This file specifies where to pick up the logs, what preprocessing to apply, and how to send the result to Elasticsearch for storage.

The filebeat.docker.yml is as follows:

filebeat.inputs:
  - type: container
    paths:
      - '/var/lib/docker/containers/*/*.log'

processors:
  - add_docker_metadata: ~

output.elasticsearch:
  hosts: ["elastic_search_host"]
  username: "elastic_username"
  password: "elastic_password"

I have implemented a basic version that reads all the container logs in the Docker space and sends them to Elasticsearch. The filebeat.inputs section defines where the data is read from; in this case the input type is container, so all the container logs within Docker will be sent to the ELK stack for storage. For more Filebeat input options, you can read this page.
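
For example, the same container input can be narrowed down. The snippet below is a hypothetical variation, not part of this article’s config, that only collects stdout:

filebeat.inputs:
  - type: container
    stream: stdout   # collect only stdout; the default is all (stdout and stderr)
    paths:
      - '/var/lib/docker/containers/*/*.log'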

Moreover, once the data is read from the specified path, some processing can be applied. Currently only add_docker_metadata is used, which enriches each event with metadata about the container it came from, since I’m doing a bare-bones implementation.

For more information on how to preprocess inputs, the documentation is available here.
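
As an illustration of what such preprocessing could look like, the sketch below (assumed options, not from this article’s setup) parses JSON-formatted application logs into structured fields and drops noisy health-check lines:

processors:
  - add_docker_metadata: ~
  # Parse a JSON log body (if any) into top-level fields
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
  # Drop events that are just health-check noise
  - drop_event:
      when:
        contains:
          message: "GET /health"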

With this, the Filebeat image and its configuration have been set up. Let’s go ahead and create a Dockerfile in the filebeat directory that the build key in the compose file points to. It builds our customized Filebeat container on top of the Filebeat Docker image provided by Elastic.

FROM docker.elastic.co/beats/filebeat:8.0.1
COPY filebeat.docker.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN chmod go-w /usr/share/filebeat/filebeat.yml
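
Optionally, you can sanity-check the config baked into the image before wiring everything together. The commands below assume the Dockerfile sits in a filebeat/ directory matching the build: key, and the image tag is arbitrary; adjust both if yours differ.

docker build -t custom-filebeat ./filebeat
docker run --rm custom-filebeat test config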

Step 3: Putting it together

For testing purposes, I have added another image, Kibana, which generates logs that Filebeat can read. Add the following under services: in the docker-compose.yml.

  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.1
    container_name: kibana
    restart: unless-stopped
    environment:
      - 'ELASTICSEARCH_HOSTS=["http://elasticsearch:9200"]'
      - "SERVER_NAME=localhost"
      - "XPACK_MONITORING_ENABLED=false"
    ports:
      - "5601:5601"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "2"

This container is bound to fail, as there is no Elasticsearch server running locally at http://elasticsearch:9200, and it will generate failure logs.

To build the images, run the following command:

docker-compose up --build

This command builds the Filebeat image, pulls the Kibana image, and starts both containers. To look at the logs, go to the Kibana dashboard, which can be accessed via the settings page of the Elastic deployment. For further guidance, you can read Step 4 of my previous article.

The logs should be visible here.
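
You can also confirm locally that Filebeat started and is harvesting files by tailing its container logs (a generic Compose command, nothing specific to this setup):

docker-compose logs -f filebeat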

To further process the data, Filebeat can send the logs to Logstash, which can be configured to transform them before eventually forwarding them to Elasticsearch.
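
A minimal sketch of that handoff, assuming a Logstash instance reachable at logstash:5044 (a hypothetical host, not part of this setup), would replace the Elasticsearch output in filebeat.docker.yml with:

output.logstash:
  hosts: ["logstash:5044"]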

The complete code can be found here.

Thank you for reading! ❤
