PKS/CFCR Logging with Elasticsearch+Fluentd+Kibana

Kubernetes provides documentation on setting up Elasticsearch as a logging target, but it's not particularly easy to follow or to run the examples. This repository is purpose-built to set up the Elasticsearch, Fluentd, Kibana (EFK) stack on Kubernetes clusters built by Pivotal Container Service (PKS) or its open source equivalent, Cloud Foundry Container Runtime (CFCR), replacing the default configs from the Kubernetes documentation with settings that match the BOSH-installed environment.

You can find the Kubernetes manifests at the following GitHub repo.


Prerequisites

  • A working PKS or CFCR Kubernetes cluster (must have privileged containers enabled)
  • kubectl
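
Before you start, a quick sanity check that kubectl can reach the cluster is worthwhile (this assumes your kubeconfig is already pointed at the PKS/CFCR cluster):

$ kubectl cluster-info
$ kubectl get nodes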


Clone this git repo:

$ git clone
$ cd cfcr-efk


Elasticsearch Operator

The Elasticsearch Operator manages ES clusters on Kubernetes.

Install the elasticsearch operator in the elasticsearch namespace:

$ kubectl create namespace elasticsearch
namespace "elasticsearch" created
$ kubectl -n elasticsearch apply -f es-operator
serviceaccount "elasticsearch-operator" created
clusterrole "elasticsearch-operator" created
clusterrolebinding "elasticsearch-operator" created
deployment "elasticsearch-operator" created

Elasticsearch + Kibana

Install Elasticsearch using the Operator:

$ kubectl apply -f elasticsearch
elasticsearchcluster "efk-es-cluster" created

Wait a few minutes and check that everything is running:

$ kubectl -n elasticsearch get pods
NAME                                        READY     STATUS    RESTARTS   AGE
cerebro-efk-es-cluster-677ffb476c-qk28j     1/1       Running   0          3m
elasticsearch-operator-797d46bb6b-8rsc9     1/1       Running   0          7m
es-client-efk-es-cluster-5c9d99c9f6-2z2lm   1/1       Running   0          3m
es-client-efk-es-cluster-5c9d99c9f6-56tcm   1/1       Running   0          3m
es-client-efk-es-cluster-5c9d99c9f6-qxcgw   1/1       Running   0          3m
es-data-efk-es-cluster-default-0            1/1       Running   0          3m
es-data-efk-es-cluster-default-1            1/1       Running   0          3m
es-data-efk-es-cluster-default-2            1/1       Running   0          3m
es-master-efk-es-cluster-default-0          1/1       Running   0          3m
es-master-efk-es-cluster-default-1          1/1       Running   0          3m
es-master-efk-es-cluster-default-2          1/1       Running   0          3m
kibana-efk-es-cluster-5c96c8ccdc-c87g4      1/1       Running   0          3m


Fluentd

Install fluentd into the kube-system namespace:

$ kubectl apply -f fluentd/
configmap "fluentd-es-config-v0.1.4" created
serviceaccount "fluentd-es" created
clusterrole "fluentd-es" created
clusterrolebinding "fluentd-es" created
daemonset "fluentd-es-v2.0.4" created

The fluentd DaemonSet will only run on nodes that carry the label its manifest selects on. We can set this label on all existing nodes like so:

$ kubectl label nodes `kubectl get nodes -o jsonpath='{.items[*].metadata.name}'`
node "vm-3ac9e496-f766-43be-6cfc-1e432cb80d2e" labeled
node "vm-8748d9ff-d25c-4abb-56e7-fd7e30afd6f6" labeled
node "vm-d204ead2-e57c-4919-5ed9-252268f82ff6" labeled

Access Logs via Kibana

Next, forward a port with kubectl to access Kibana. You could expose it with a Service or Ingress, but then you'd want to secure it first, which is outside the scope of this guide.

$ kubectl -n elasticsearch get pods | grep kibana
kibana-efk-es-cluster-5c96c8ccdc-c87g4 1/1 Running 0 23m
$ kubectl -n elasticsearch port-forward kibana-efk-es-cluster-5c96c8ccdc-c87g4 5601:5601
Forwarding from 127.0.0.1:5601 -> 5601
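
If you'd rather not copy the generated pod name by hand, a small shell helper can pull it out of the pod listing (this simply greps the listing above; adjust the pattern if your pod names differ):

$ KIBANA_POD=$(kubectl -n elasticsearch get pods --no-headers | awk '/^kibana/{print $1; exit}')
$ kubectl -n elasticsearch port-forward "$KIBANA_POD" 5601:5601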

Open http://localhost:5601 in your browser; it should land on a setup page where you select the index pattern logstash-* and the time field @timestamp, then click Create.

From there you can open the Discover view and see any logs that have already been collected, which should include logs from running Pods as well as from kubelet, kube-proxy, etc.
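
If Discover shows no data, a useful check is whether Fluentd has created any logstash-* indices yet. One way to look, assuming Elasticsearch serves plain HTTP on the default 9200 port (if the manifests enable TLS, switch to https and add -k to curl), is to port-forward an es-client pod in a second terminal (or background it as below and give it a moment to connect) and query the cat API:

$ ES_POD=$(kubectl -n elasticsearch get pods --no-headers | awk '/^es-client/{print $1; exit}')
$ kubectl -n elasticsearch port-forward "$ES_POD" 9200:9200 &
$ curl -s 'http://localhost:9200/_cat/indices?v' | grep logstash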