Logging with Fluentd, Elasticsearch, and Kibana (EFK) on Kubernetes on Azure

Tim Park
Mar 27, 2018


NOTE: I wrote this originally over a year ago, and it's still a great entry point for understanding how to install the EFK stack "the hard way" with lots of detail. For actual day-to-day DevOps, however, I would instead recommend that you have a look at our Bedrock open source project, which automates the installation and configuration of this and other cloud native infrastructure.

I recently set up the Elasticsearch, Fluentd, Kibana (EFK) logging stack on a Kubernetes cluster on Azure. Since there wasn't a walkthrough for this when I did it, I took notes in the hope that they would be helpful to others.

This logging stack is one of the most popular combinations among open platforms. In fact, I would say the only real debate is around the mechanism used for log shipping, aka the F (fluentd), which is sometimes swapped out for an L (Logstash). Otherwise, Kibana visualizing Elasticsearch-indexed data seems to be the dominant pattern, and after playing with the stack, I can see why.

The good news is that it is quite easy to set up this stack on Kubernetes. I’m going to assume that you have a Kubernetes cluster at hand for this walkthrough. If you don’t, take a moment to spin one up.
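
If you are starting from scratch on Azure, a managed AKS cluster is probably the quickest route today. Here is a minimal sketch with the az CLI; the resource group and cluster names are placeholders, and you should adjust the region and node count to taste:

$ az group create --name efk-demo --location westus2
$ az aks create --resource-group efk-demo --name efk-cluster --node-count 3 --generate-ssh-keys
$ az aks get-credentials --resource-group efk-demo --name efk-cluster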

I first tried the various Helm charts for each of these three platforms but did not have much success with any of them. Instead, I discovered that the Kubernetes project itself maintains a complete and up-to-date set of resource definitions for these, so I started with that:

$ git clone https://github.com/kubernetes/kubernetes
$ cd kubernetes/cluster/addons/fluentd-elasticsearch
$ ls
es-image es-statefulset.yaml fluentd-es-ds.yaml kibana-deployment.yaml OWNERS README.md
es-service.yaml fluentd-es-configmap.yaml fluentd-es-image kibana-service.yaml podsecuritypolicies
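
One caveat: these manifests track the master branch and change over time. If you want definitions that line up with your cluster version, check out the corresponding release branch first (the branch name below is just an example; pick the one that matches your cluster):

$ git checkout release-1.10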

The good news is that there are only a few modifications you need to make, plus one addition that I'd recommend.

The first modification you need to make is to add volumeClaimTemplates to es-statefulset.yaml. Adjust the requested storage size if your incoming log volume is larger or smaller. I am also using the managed-premium storage class that is bundled with the Kubernetes cluster I created with ACS Engine, but replace this if necessary with your desired storage class.

  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: managed-premium
      resources:
        requests:
          storage: 64Gi

Next, delete the previous emptyDir volume definition, since the volumeClaimTemplate above supersedes it.

      volumes:
      - name: elasticsearch-logging
        emptyDir: {}

I also increased the number of replicas from 2 to 6. I'd recommend starting with at least 3 servers in your cluster.

   replicas: 6

I also deleted the following couple of lines from fluentd-es-ds.yaml:

      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"

This nodeSelector lets you target which nodes fluentd should collect and ship logs from. I wanted that to happen on all nodes without having to label them specifically, so I removed it.
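
If you would rather keep the selector and opt nodes in explicitly, labeling them is all it takes (the node name below is a placeholder):

$ kubectl label node <your-node-name> beta.kubernetes.io/fluentd-ds-ready=true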

With this, you are ready to spin up the three parts of the stack:

$ kubectl create -f es-statefulset.yaml
$ kubectl create -f es-service.yaml
$ kubectl create -f fluentd-es-configmap.yaml
$ kubectl create -f fluentd-es-ds.yaml
$ kubectl create -f kibana-deployment.yaml
$ kubectl create -f kibana-service.yaml

Once these are applied, you should see a fluentd pod spun up on each node of your cluster, the requested number of elasticsearch-logging pods, and a single kibana pod.
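
You can watch them come up with kubectl; these manifests all target the kube-system namespace, and the pod name prefixes below assume the stock definitions. The second command confirms that the volumeClaimTemplates from earlier produced bound persistent volume claims:

$ kubectl get pods -n kube-system | grep -E 'fluentd-es|elasticsearch-logging|kibana-logging'
$ kubectl get pvc -n kube-system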

I added one more element to this: es-curator. This optional add-on performs automatic time-based culling of your log data. I am not going to go through its installation in detail, but instead link out to this excellent blog entry on installing this stack on AWS, which covers it about midway down.
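
To give a flavor of what the curator runs, here is a minimal sketch of a curator actions file that deletes indices older than seven days. It assumes the default fluentd-elasticsearch setup, which writes daily logstash-YYYY.MM.DD indices, so adjust the prefix and retention to match your configuration:

actions:
  1:
    action: delete_indices
    description: Delete log indices older than 7 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7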

I had one issue during my installation of Elasticsearch. For whatever reason, Elasticsearch did not automatically assign its missing shards to a server, and it was necessary to shell into one of the elasticsearch pods and turn automatic shard allocation back on:

$ kubectl exec -it elasticsearch-logging-0 -n kube-system -- bash
$ curl -XPUT -H 'Content-Type: application/json' 'http://elasticsearch-logging:9200/_cluster/settings' -d '{ "transient": { "cluster.routing.allocation.enable": "all" } }'
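
Afterwards, still from within that shell, you can confirm that the shards have settled; a green status means every shard has been assigned:

$ curl 'http://elasticsearch-logging:9200/_cluster/health?pretty'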

With all of this installed, you should have a functional log management system running. Kibana is reachable through the Kubernetes API server proxy, so you can do:

$ kubectl proxy
$ google-chrome http://localhost:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy

And the Kibana user interface should appear. Fluentd should already have shipped logs to Elasticsearch, so once you define an index pattern (with the stock fluentd configuration, the indices follow the logstash-* pattern), you can fire up your first search against them. Here are my two favorites so far:

kubernetes.namespace_name:"my-namespace"  (queries logs from a ns)
kubernetes.host:"k8s-agents-32616713-0" (useful for node issues)

I hope you found this end-to-end walkthrough helpful. If you have feedback or enjoy content like this, feel free to reach out or follow me on Twitter.
