Kubernetes Logging with Filebeat and Elasticsearch (Part 2)

VAIBHAV THAKUR
The MetricFire Blog

--

This blog has been written in partnership with MetricFire. If you are planning to run Kubernetes in production you should certainly check them out.

Introduction

In Part 1 of this series we learned how to configure the Elasticsearch backend for logging. In this tutorial we will configure Filebeat to run as a DaemonSet in our Kubernetes cluster and ship logs to that Elasticsearch backend. We are using Filebeat instead of Fluentd or Fluent Bit because it is an extremely lightweight utility and has first-class support for Kubernetes, which makes it a good fit for production-grade setups.

Deployment Architecture

Filebeat will run as a DaemonSet in our Kubernetes cluster. It will be:

  • Deployed in a separate namespace called logging.
  • Scheduled on both master and worker nodes.
  • On master nodes, the pods will forward api-server logs for audit and cluster administration purposes.
  • On worker nodes, the pods will forward workload-related logs for application observability.
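As a rough sketch of the scheduling part of this architecture: the namespace can be created up front, and the DaemonSet pod spec needs a toleration so that pods also land on master nodes, which are normally tainted against regular workloads. The taint key below (`node-role.kubernetes.io/master`) is an assumption that matches older Kubernetes versions; newer clusters use `node-role.kubernetes.io/control-plane`, so check your cluster's node taints.

```yaml
# Namespace for all logging components (name "logging" as described above).
apiVersion: v1
kind: Namespace
metadata:
  name: logging
```

And in the DaemonSet's pod template, a toleration like this allows scheduling on tainted master nodes in addition to workers:

```yaml
# Fragment of the DaemonSet pod spec -- not a complete manifest.
    spec:
      tolerations:
      # Assumed master taint key; on newer clusters use
      # node-role.kubernetes.io/control-plane instead.
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
```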

Creating Filebeat ServiceAccount and ClusterRole
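Filebeat needs read access to cluster metadata (pods, namespaces, nodes) so it can enrich log events with Kubernetes fields. The manifests below are a minimal sketch of what this typically looks like, assuming the namespace is named `logging` and the resources are all named `filebeat`; adjust names to match your own setup.

```yaml
# ServiceAccount the Filebeat DaemonSet pods will run as.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: logging
---
# ClusterRole granting read-only access to the metadata
# Filebeat's Kubernetes autodiscovery needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list
---
# Bind the ClusterRole to the ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: logging
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole (rather than a namespaced Role) is used here because Filebeat must read pod metadata across all namespaces, not just its own.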
