Getting started with EFK (Fluent Bit, Elasticsearch and Kibana) stack in Kubernetes

Centralised logging is an essential part of your system, regardless of whether it is a microservices or a monolithic platform. It helps your development team quickly observe, search, and aggregate their application logs through a web browser with a few button clicks.

Kibana Dashboard

This post will show how you can quickly aggregate container logs from your Kubernetes pods and view them from a Kibana dashboard. I’m assuming that you have basic knowledge of Kubernetes and the kubectl command-line tool.

Container Log Format and Log File

Working with Docker, I have learned that containers store their log files in the host’s /var/log/containers/ directory with a .log file extension. The logs are formatted as JSON because the Kubernetes nodes are configured with Docker’s json-file logging driver.

The container log file will look like the following:

{"log":"<<CONTAINER LOG HERE>>","stream":"stdout","time":"<<TIMESTAMP HERE>>"}
{"log":"<<CONTAINER LOG HERE>>","stream":"stdout","time":"<<TIMESTAMP HERE>>"}
{"log":"<<CONTAINER LOG HERE>>","stream":"stdout","time":"<<TIMESTAMP HERE>>"}

I have also learned that you can check the logging driver used by Docker by running docker info from a shell on any of your Kubernetes nodes.

$ docker info
...
Logging Driver: json-file

Fluent Bit

You may have heard of Fluentd; the main differences between the two are:

  • Fluentd is a log collector, processor, and aggregator.
  • Fluent Bit is a log collector and processor (it doesn’t have the strong aggregation features that Fluentd has).

In most of my previous projects, the development team only needed a simple, real-time view of their application logs and the ability to create simple dashboards from that data. There was rarely a requirement for extra filtering or analytics.

This is where Fluent Bit shines, thanks to its tiny memory footprint and its built-in parsers and filters for Docker and Kubernetes log files.

Elasticsearch and Kibana

Fluent Bit ships the logs to Elasticsearch, which acts as the data store for the log records that Fluent Bit has collected and parsed. This is commonly called document indexing: each line of the container logs is stored as a time-stamped document.
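
As a rough illustration (the exact fields depend on which Fluent Bit filters are enabled, so treat this as a sketch rather than the exact output), a single indexed document ends up looking something like this:

{
  "@timestamp": "<<TIMESTAMP HERE>>",
  "log": "<<CONTAINER LOG HERE>>",
  "stream": "stdout",
  "kubernetes": {
    "pod_name": "<<POD NAME>>",
    "namespace_name": "<<NAMESPACE>>",
    "container_name": "<<CONTAINER NAME>>"
  }
}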

Kibana, on the other hand, is the visualizer of that indexed data. You can view the logs in real time, use filters to narrow down your search, and create graphs and dashboards from those searches.

Getting Started

Now that you have learned the fundamentals of the software we are going to use, let’s get started with the deployment!

Prerequisites

  • An existing Kubernetes cluster. Use minikube for a local cluster.
  • The kubectl command-line tool for Windows, Linux, or Mac.

Deploy Fluent Bit, Elasticsearch and Kibana

Create the Namespace

kubectl create namespace logging

Deploy Elasticsearch

Option 1: This is a non-production installation of Elasticsearch.
kubectl run elasticsearch \
--image=docker.elastic.co/elasticsearch/elasticsearch:6.3.2 \
--namespace logging

kubectl expose deploy elasticsearch --port 9200 --namespace logging
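
Before moving on, you can quickly check that Elasticsearch is reachable from inside the cluster. The throw-away busybox pod below is just one way to do this; it should print Elasticsearch’s JSON banner:

kubectl run es-check -it --rm --restart=Never --image=busybox \
--namespace logging -- wget -qO- http://elasticsearch:9200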

Option 2: Helm installation of Elasticsearch. We will disable persistence for simplicity. Warning: this will consume a lot of memory in your cluster.

Source: https://github.com/helm/charts/tree/master/incubator/elasticsearch

helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm install --name elasticsearch incubator/elasticsearch \
--set master.persistence.enabled=false \
--set data.persistence.enabled=false \
--set image.tag=6.4.2 \
--namespace logging
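
The chart deploys separate master, data, and client pods, so it can take a few minutes before everything is ready. You can watch them come up with:

kubectl get pods --namespace logging -w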

Deploy Kibana

If you used Elasticsearch deployment Option 1:

helm install --name kibana stable/kibana \
--set env.ELASTICSEARCH_URL=http://elasticsearch:9200 \
--set image.tag=6.4.2 \
--namespace logging

If you used Elasticsearch deployment Option 2:

helm install --name kibana stable/kibana \
--set env.ELASTICSEARCH_URL=http://elasticsearch-client:9200 \
--set image.tag=6.4.2 \
--namespace logging
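
Either way, you can confirm that the Kibana deployment and its service were created:

kubectl get deploy,svc --namespace logging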

Deploy Fluent Bit

All steps from here on are mostly based on the Fluent Bit Kubernetes Deployment documentation, and all files used are versioned on GitHub here.

Create the RBAC resources for Fluent Bit

kubectl apply -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml

kubectl apply -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml

kubectl apply -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml

Create the Fluent Bit Config Map

This Config Map will be used as the base configuration of the Fluent Bit container. You will see sections such as INPUT, OUTPUT, FILTER, and PARSER in this file.

kubectl apply -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml
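
As a rough, abbreviated sketch of what this configuration contains (the exact keys and values may differ between versions of the upstream file), it wires a tail INPUT that reads the container log files, a kubernetes FILTER that enriches each record with pod metadata, and an es OUTPUT that points at Elasticsearch:

[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Parser  docker
    Tag     kube.*

[FILTER]
    Name       kubernetes
    Match      kube.*
    Merge_Log  On

[OUTPUT]
    Name             es
    Match            *
    Host             ${FLUENT_ELASTICSEARCH_HOST}
    Port             ${FLUENT_ELASTICSEARCH_PORT}
    Logstash_Format  On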

Deploy the Fluent Bit DaemonSet

Fluent Bit must be deployed as a DaemonSet; that way, Kubernetes ensures that one Fluent Bit pod runs on each Kubernetes node.

If you used Elasticsearch deployment Option 1:

kubectl apply -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml

If you used Elasticsearch deployment Option 2:

You will have to download the YAML file first and modify the FLUENT_ELASTICSEARCH_HOST variable from elasticsearch to elasticsearch-client.

wget https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-ds.yaml
# edit FLUENT_ELASTICSEARCH_HOST in fluent-bit-ds.yaml, then apply the local file:
kubectl apply -f fluent-bit-ds.yaml
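
If you prefer to patch the file from the command line instead of opening an editor, a sed one-liner along these lines should work, assuming the env entry is written as value: "elasticsearch" in the manifest:

sed -i 's/value: "elasticsearch"/value: "elasticsearch-client"/' fluent-bit-ds.yaml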

Check if everything is running

kubectl get pods -n logging
NAME                             READY     STATUS    RESTARTS   AGE
elasticsearch-78987949dc-7wj8m   1/1       Running   0          1d
fluent-bit-2dv5n                 1/1       Running   7          1d
kibana-6f75b4fdcf-9qbp7          1/1       Running   0          1d
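
Since Fluent Bit runs as a DaemonSet, you can also confirm that it is scheduled on every node; the DESIRED and READY counts should match your node count (the DaemonSet is named fluent-bit in the upstream manifest):

kubectl get daemonset fluent-bit --namespace logging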

Populate logs

Deploy an example Nginx container and port-forward the traffic to your localhost (replace the pod name below with the one generated in your cluster).

kubectl run nginx --image=nginx -n logging

kubectl port-forward nginx-8586cf59-kpbf6 8081:80 -n logging &

Curl it a few times, and press Ctrl+C when done.

while true; do curl localhost:8081; sleep 2; done
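
If you want to confirm that the logs actually reached Elasticsearch before opening Kibana, you can port-forward the Elasticsearch service and query the document count. The logstash-* index name comes from Fluent Bit’s Logstash_Format setting; use the elasticsearch-client service instead if you installed Elasticsearch with Helm (Option 2):

kubectl port-forward svc/elasticsearch 9200 -n logging &
curl 'http://localhost:9200/logstash-*/_count'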

Viewing Logs in Kibana

Access Kibana quickly through port-forwarding (again, substitute your own Kibana pod name):

kubectl port-forward kibana-6f75b4fdcf-9qbp7 5601 -n logging

Configure an index pattern matching logstash* and use @timestamp as the Time Filter field name.

Go to Discover and you can now add your custom filters like the one in the screenshot below!

Kibana Dashboard

Your development team should now be able to see logs in real time from here!

Summary

Seeing Logs in Real Time

You were able to successfully deploy a Fluent Bit DaemonSet in Kubernetes to aggregate logs and push them to Elasticsearch.

You were also able to see your Nginx container logs from Kibana by using custom filters.

Elasticsearch Deployment

For the sake of simplicity, you deployed an ephemeral Elasticsearch container to store the logs aggregated by Fluent Bit. In real life, you may opt to use a managed Elasticsearch service, or build a highly available Elasticsearch cluster on your own (not recommended).

If your development team is not concerned about losing log data in their development environment, it makes sense to use a simple deployment now and plan for a better deployment strategy in the future.

You should also be aware that this Elasticsearch deployment is not secured with authentication, and enabling Elastic’s built-in security requires a paid license. If your only goal is to use Elasticsearch for logs, you can use an open-source plugin such as Search Guard for security.

Port Forwarding

We use port-forwarding a lot in this guide. In real life, you would use a Kubernetes Ingress to expose web applications outside your cluster, as sketched below.
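
A minimal sketch of such an Ingress for Kibana might look like the following. The hostname is a placeholder, and the backend service name and port are assumptions based on the Kibana chart defaults, so adjust them to match your cluster:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: logging
spec:
  rules:
  - host: kibana.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana  # assumed Kibana service name
          servicePort: 5601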

What To Do Next?

Aggregating, filtering, indexing, and visualizing application logs has always been one of the pain points for most development teams out there. Knowing how to do this easily for them will really make them happy. So if you are running a Kubernetes cluster, try this out and let your team experience it; for sure, you will be throwing high-fives and hugging each other.
