How to monitor distributed logs in Kubernetes with the EFK stack.

No more searching endlessly for the correct logs.

You need a few things.

  1. An existing Kubernetes cluster
  2. The kubectl binary installed locally

Getting a Kubernetes Cluster

There are a multitude of ways to set up a Kubernetes cluster, but I find the easiest is to use a DigitalOcean managed cluster. They already have all the networking and storage configured, and all you have to do is create the cluster and download your kubeconfig.

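Once the kubeconfig is downloaded, point kubectl at it. The file path below is just an example; substitute wherever you saved yours:

```shell
# Example path only -- use the location of your downloaded kubeconfig
export KUBECONFIG=~/Downloads/k8s-efk-kubeconfig.yaml
# Should list your cluster's worker nodes
kubectl get nodes
```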

Installing kubectl

Checkout the up-to-date Kubernetes docs for installing kubectl
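Once installed, a quick sanity check confirms the binary is on your PATH:

```shell
# Prints the client version; no cluster connection needed
kubectl version --client
```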

Create a project directory

We’ll want a place to store all of our Kubernetes manifests to be re-applied to a new cluster later or to recreate this one.

# (directory name is up to you)
$ mkdir efk-tutorial && cd efk-tutorial
$ git init
$ echo "# EFK Tutorial" >> README.md
$ git add README.md
$ git commit -m "Initial commit"

Deploy a workload which generates logs

If you already have a workload running that generates logs, you can skip this part and collect your own logs instead.

# ./random-generator.yml
# The namespace for our log generator
kind: Namespace
apiVersion: v1
metadata:
  name: random-generator
---
# The Deployment which will run our log generator
apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-generator
  namespace: random-generator
  labels:
    app: random-generator
spec:
  selector:
    matchLabels:
      app: random-generator
  template:
    metadata:
      labels:
        app: random-generator
    spec:
      containers:
        - name: random-generator
          imagePullPolicy: Always
          # You can build the image from the source code and push it to
          # your own Docker Hub if you prefer.
          image: chriscmsoft/random-generator:latest
$ kubectl apply -f random-generator.yml
namespace/random-generator created
deployment.apps/random-generator created
$ kubectl logs deploy/random-generator -n random-generator
{"name": "Siovaeloi, Protector Of The Weak"}
{"name": "Qandocruss, Champion Of The White"}
{"name": "Frarvurth, The Voiceless"}
[...]
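Note that each log line is a self-contained JSON document, which is exactly the shape that lets Fluentd ship it to ElasticSearch as a structured record rather than an opaque string. A quick check on one of the sample lines above (using python3 here just to parse the JSON):

```shell
# Extract the "name" field from one of the log lines above
echo '{"name": "Siovaeloi, Protector Of The Weak"}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["name"])'
```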

Setup the directory structure

The completed directory structure will look more or less like this

$ tree
.
├── README.md
├── logging
│   ├── elasticsearch
│   │   ├── service.yml
│   │   └── statefulset.yml
│   ├── fluentd
│   │   └── daemonset.yml
│   ├── kibana
│   │   ├── deployment.yml
│   │   └── service.yml
│   └── namespace.yml
└── random-generator.yml

4 directories, 8 files
$ mkdir -p logging
$ cd logging
# logging/namespace.yml
kind: Namespace
apiVersion: v1
metadata:
  name: logging
$ kubectl apply -f namespace.yml
namespace/logging created
$ kubectl get namespaces
NAME               STATUS   AGE
[...]
logging            Active   9s
random-generator   Active   106m

Deploy ElasticSearch

ElasticSearch is where our log data will be stored, so we need to deploy it first.

$ mkdir -p elasticsearch
$ cd elasticsearch
# logging/elasticsearch/statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          volumeMounts:
            - name: elastic-data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: kubernetes-logging
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "elasticsearch-0.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "elasticsearch-0"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: elastic-data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: elastic-data
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: do-block-storage
        resources:
          requests:
            storage: 10Gi
$ kubectl apply -f statefulset.yml
statefulset.apps/elasticsearch created
$ kubectl rollout status statefulset/elasticsearch -n logging
Waiting for 1 pods to be ready...
partitioned roll out complete: 1 new pods have been updated...
$ kubectl get pods -n logging
NAME              READY   STATUS    RESTARTS   AGE
elasticsearch-0   1/1     Running   0          7m27s
# logging/elasticsearch/service.yml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
$ kubectl apply -f service.yml
service/elasticsearch created
$ kubectl port-forward svc/elasticsearch 9200 -n logging
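While the port-forward is running, you can sanity-check ElasticSearch from a second terminal. With the default, unauthenticated 7.x setup it should answer with a JSON banner that includes the cluster name we configured:

```shell
# Run in a second terminal while the port-forward is active
curl http://localhost:9200
# Should return a JSON banner containing "cluster_name" : "kubernetes-logging"
```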

Next we’ll add Kibana

Kibana is probably the simplest to set up.

# Change back to your logging directory first
$ cd ../
$ mkdir kibana
$ cd kibana
# logging/kibana/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.2.0
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch:9200
          ports:
            - containerPort: 5601
$ kubectl apply -f deployment.yml
deployment.apps/kibana created
$ kubectl rollout status deploy/kibana -n logging
[...]
deployment "kibana" successfully rolled out
$ kubectl get pods -n logging
NAME                      READY   STATUS              RESTARTS   AGE
elasticsearch-0           1/1     Running             0          32m
kibana-67f95cc5f4-pqbwt   0/1     ContainerCreating   0          28s
# logging/kibana/service.yml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
    - port: 5601
  selector:
    app: kibana
$ kubectl apply -f service.yml
service/kibana created
$ kubectl port-forward svc/kibana 5601 -n logging

Next we add Fluentd

Fluentd will grab the logs from all your containers and push them into ElasticSearch, so you can view them in Kibana. Now you can see how the whole pipeline fits together.

$ cd ../
$ mkdir fluentd
$ cd fluentd/
# logging/fluentd/daemonset.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: logging
  labels:
    app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  labels:
    app: fluentd
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
            - name: FLUENT_ELASTICSEARCH_SCHEME
              value: "http"
            - name: FLUENTD_SYSTEMD_CONF
              value: disable
          resources:
            limits:
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
$ kubectl apply -f daemonset.yml
daemonset.apps/fluentd created
$ kubectl rollout status daemonset/fluentd -n logging
Waiting for daemon set spec update to be observed...
Waiting for daemon set "fluentd" rollout to finish: 1 out of 2 new pods have been updated...
Waiting for daemon set "fluentd" rollout to finish: 0 of 2 updated pods are available...
Waiting for daemon set "fluentd" rollout to finish: 1 of 2 updated pods are available...
daemon set "fluentd" successfully rolled out
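Before moving on to Kibana, you can confirm Fluentd is actually shipping logs by port-forwarding ElasticSearch again and listing its indices. This is a sketch assuming the image's default Logstash-style index naming:

```shell
# Port-forward ElasticSearch in one terminal...
kubectl port-forward svc/elasticsearch 9200 -n logging
# ...then, in a second terminal, list the indices Fluentd has created
curl http://localhost:9200/_cat/indices?v
# Expect logstash-YYYY.MM.DD indices that grow as logs arrive
```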

Setup index pattern in Kibana

Port-forward Kibana again

$ kubectl port-forward svc/kibana 5601 -n logging
(Screenshots: open http://localhost:5601 in your browser and give Kibana a few seconds to start. Then walk through the index pattern wizard to create an index pattern matching the indices Fluentd is writing, selecting the time field when prompted.)

Checking logs in Kibana

You should now be able to see all your logs in Kibana.


Searching for only random generator pods

In the search bar, filter down to only our random generator containers by entering kubernetes.container_name : random-generator


It scales too

What happens when we scale the random generator to, say, 10 pods?

$ kubectl scale deploy/random-generator -n random-generator --replicas 10
deployment.extensions/random-generator scaled
$ kubectl rollout status deploy/random-generator -n random-generator
Waiting for deployment "random-generator" rollout to finish: 1 of 10 updated replicas are available...
Waiting for deployment "random-generator" rollout to finish: 2 of 10 updated replicas are available...
Waiting for deployment "random-generator" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "random-generator" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "random-generator" rollout to finish: 5 of 10 updated replicas are available...
Waiting for deployment "random-generator" rollout to finish: 6 of 10 updated replicas are available...
Waiting for deployment "random-generator" rollout to finish: 7 of 10 updated replicas are available...
Waiting for deployment "random-generator" rollout to finish: 8 of 10 updated replicas are available...
Waiting for deployment "random-generator" rollout to finish: 9 of 10 updated replicas are available...
deployment "random-generator" successfully rolled out
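On the kubectl side, a label selector lets you tail all the replicas at once instead of hunting for individual pod names:

```shell
# Fetch the most recent lines from every pod matching the app label
kubectl logs -n random-generator -l app=random-generator --tail=2
```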
