Kubernetes — Audit Logging with the Elastic Stack

Aaron Pejakovic
ELMO Software
May 9, 2022

In the previous article, I discussed how to authenticate to your Kubernetes cluster using Keycloak. You might be thinking: great, we now have secure authentication to the cluster, so we are safe. However, authentication alone doesn’t stop authenticated users from performing harmful actions within the cluster, whether on purpose or by accident. This is where audit logging becomes a powerful tool for any DevSecOps team. In this article, we will walk through how to quickly configure the Elastic Stack (Elasticsearch, Filebeat, and Kibana) on Kubernetes to store and visualize these audit logs.

Prerequisites

Before starting, you will need:

- A Kubernetes cluster with API server audit logging enabled (this article assumes the audit log files are written to /var/log/kube-apiserver-audit* on the master nodes)
- kubectl and Helm installed and configured against the cluster
- The Elastic Helm charts repo added: helm repo add elastic https://helm.elastic.co

Installing Elasticsearch

Elasticsearch is an open-source search and analytics engine for all types of data. Elasticsearch can be installed using one of the examples from the Elastic Helm charts repo.

For the purpose of this article, we will install the default chart using this command:

helm upgrade --wait --timeout=1200s --install es-audit elastic/elasticsearch

You should now see three Elasticsearch pods running, and you can access Elasticsearch using:

kubectl port-forward svc/elasticsearch-master 9200:9200
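With the port-forward running, a quick sanity check (assuming the default release and service names used above) is to query the cluster health endpoint:

```shell
# In a second terminal, with the port-forward still running,
# ask Elasticsearch for its cluster health. A healthy default
# three-node install reports "status" : "green".
curl -s http://localhost:9200/_cluster/health?pretty
```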

Installing Kibana

Kibana is a visualization tool that connects to Elasticsearch. We can also install Kibana using one of the examples in the same Helm charts repo.

We will just install the default chart, which will automatically connect to the default Elasticsearch installation from above:

helm upgrade --wait --timeout=1200s --install kibana-audit elastic/kibana

You will now see a Kibana pod, and you can access Kibana with the following command:

kubectl port-forward svc/kibana-audit-kibana 5601:5601
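Before opening the browser, you can confirm Kibana is up via its status API (a sketch assuming the default port-forward above; the exact response shape varies between Kibana 7.x and 8.x):

```shell
# Kibana exposes a status endpoint; the overall status should read
# "green" (7.x) or "available" (8.x) once it has connected to Elasticsearch.
curl -s http://localhost:5601/api/status
```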

Installing Filebeat

Filebeat is a lightweight shipper for logs and files. Filebeat runs on every node within our Kubernetes cluster, gathers the logs from the audit files, and ships them to Elasticsearch. Filebeat is installed as a DaemonSet on the Kubernetes cluster, which means one Filebeat pod runs on every node. This ensures all logs are picked up from all nodes. The nodes we mainly care about are the master nodes, because this is where the Kubernetes audit log files reside.
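Note that the kube-apiserver only writes these audit files if audit logging has been enabled on it. As an illustration (the file path and rule here are assumptions; how you set the flags depends on how your control plane is managed), the API server is typically started with --audit-policy-file and --audit-log-path pointing at a policy such as:

```yaml
# /etc/kubernetes/audit-policy.yaml (hypothetical path)
# Minimal policy: record every request at the Metadata level,
# i.e. who did what, when, and to which resource, without request bodies.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
```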

We can’t install Filebeat with the default config, because we need to instruct it to pick up our audit files. When installing the Helm chart we can use the values file below:

---
daemonset:
  extraEnvs: []
  filebeatConfig:
    filebeat.yml: |
      filebeat.inputs:
        - type: filestream
          id: my-filestream-id
          paths:
            - /var/log/kube-apiserver-audit*
          parsers:
            - ndjson:
                # Place the decoded JSON keys at the root of the event
                # (the filestream equivalent of the old keys_under_root: true)
                target: ""
      output.elasticsearch:
        hosts: ["elasticsearch-master:9200"]
        protocol: http
  secretMounts: []
  tolerations:
    # Run on every node, including tainted master nodes
    - operator: "Exists"

This command can be used to install Filebeat, passing in the values file from above:

helm upgrade --wait --timeout=1200s --install filebeat-audit elastic/filebeat -f ./values.yaml

You should now be able to see the Filebeat pods running. The number will vary based on how many nodes you have in your cluster.

Conclusion

Once all the above is installed, you will be able to see the JSON-parsed audit logs in the Kibana console.

In the event of an incident or issues with the cluster, these logs will allow you to visualize any actions taken by a user in the Kubernetes cluster. The logs can also help meet the auditing requirements of certain compliance frameworks.
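For reference, each line Filebeat ships is one Kubernetes audit event. A representative Metadata-level event (illustrative values, not captured from a real cluster) looks roughly like this, and the fields shown are the ones most useful for building dashboards:

```json
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "Metadata",
  "auditID": "a1b2c3d4-0000-0000-0000-000000000000",
  "stage": "ResponseComplete",
  "verb": "delete",
  "requestURI": "/api/v1/namespaces/default/pods/my-pod",
  "user": { "username": "jane@example.com", "groups": ["developers"] },
  "objectRef": { "resource": "pods", "namespace": "default", "name": "my-pod" },
  "responseStatus": { "code": 200 }
}
```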

Next Steps — Kibana allows some really powerful dashboards to be built. I would advise anyone who is interested to try creating some visualizations that can be included in a dashboard.
