Forwarding Kubernetes Container’s Logs to Elasticsearch with Fluent Bit and Showing Logs with Kibana

Ertuğrul Gamgam · Published in Trendyol Tech · 5 min read · Dec 14, 2019


In this story, I explain how to forward a container's logs to Elasticsearch with Fluent Bit.

What is Fluent Bit?

Fluent Bit is a lightweight log processor and forwarder: it collects data from an input, processes it, and delivers it to an output. It supports many kinds of inputs and outputs.

The main topic of this story is how to collect logs from log files and forward them to Elasticsearch.

Why should I use a log forwarder when I can collect my application's logs some other way?

The Twelve-Factor App methodology explains this perfectly. It shows us how to build applications for the modern era. According to the Twelve-Factor methodology, forwarding and storing logs should not be the responsibility of our applications. An application's only responsibility is to write its logs to stdout. From there, the execution environment writes them to a file, and a log forwarder delivers them to their final destination. This style of log management gives us real advantages. Say we decide to send our application's logs to a different destination instead of Elasticsearch: the only change needed is in the output config of the log forwarder, and the application never notices. By using a log forwarder, we free our applications from responsibilities that are not theirs.

Why Fluent Bit?

Of course, every tool has advantages and disadvantages. Working in a Kubernetes environment in perfect harmony, being lightweight, and not requiring complex configs are the main reasons to choose Fluent Bit.

From this point on, I focus on three main topics.

1. Sending Application Logs to Stdout as JSON

   1.1 JSON Logging with ASP.NET Core 3.0

   1.2 JSON Logging with Spring Boot 2.2.1

2. Running Fluent Bit on Kubernetes and Forwarding Logs to Elasticsearch

3. Showing Logs with Kibana

1) Sending Application Logs to Stdout as JSON

Logs should be in JSON format so that Elasticsearch can index them as structured documents. Since ASP.NET Core and Spring Boot are both popular frameworks, I demonstrate this by creating a REST API project with each of them.

1.1) JSON Logging with ASP.NET CORE 3.0

After trying a few libraries for JSON-formatted logs, I decided to use NLog.

First, let's add the necessary packages to the .csproj file.
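Assuming NLog for the logging pipeline, a minimal package set could look like this in the .csproj (the versions shown are illustrative, not necessarily the exact ones from the original project):

```xml
<ItemGroup>
  <!-- NLog core plus the ASP.NET Core integration (versions are examples) -->
  <PackageReference Include="NLog" Version="4.6.8" />
  <PackageReference Include="NLog.Web.AspNetCore" Version="4.9.0" />
</ItemGroup>
```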

We need to add an item group to the .csproj file so that nlog.config is copied to the bin folder.
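A sketch of that item group (the original gist is not shown here, so treat the exact attributes as an assumption):

```xml
<ItemGroup>
  <!-- Copy nlog.config next to the binaries on every build -->
  <None Update="nlog.config" CopyToOutputDirectory="Always" />
</ItemGroup>
```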

Program.cs must look like this.

We remove the existing logging providers with ConfigureLogging((context, builder) => { builder.ClearProviders(); }). If we don't, the default .NET loggers and NLog will both write to the console, and every entry will appear twice.
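A sketch of such a Program.cs for ASP.NET Core 3.0, assuming the NLog.Web.AspNetCore package (the Startup class name is the template default):

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using NLog.Web;

public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Clear the built-in providers so only NLog writes to stdout
            .ConfigureLogging((context, builder) => { builder.ClearProviders(); })
            .UseNLog() // extension method from NLog.Web.AspNetCore
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}
```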

Let's create an nlog.config file in the root directory of our project.
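A minimal nlog.config that emits one JSON object per log line to the console could look like this (the attribute names are illustrative):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Write every log event to stdout as a single JSON line -->
    <target name="console" xsi:type="Console">
      <layout xsi:type="JsonLayout">
        <attribute name="timestamp" layout="${longdate}" />
        <attribute name="level" layout="${level:upperCase=true}" />
        <attribute name="logger" layout="${logger}" />
        <attribute name="message" layout="${message}" />
        <attribute name="exception" layout="${exception:format=tostring}" />
      </layout>
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="console" />
  </rules>
</nlog>
```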

Of course, you can add attributes to the logs as you wish; these were chosen just for this example.

You can use the link below to learn about other attributes you can add.

You can find the source code here.

1.2) JSON Logging with Spring Boot 2.2.1

Let's add this dependency to the pom.xml file to produce logs in JSON format.
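A common choice for this is logstash-logback-encoder, which renders Logback events as JSON; the exact artifact is an assumption here, since the original snippet is not shown, and the version is just one that was current at the time of writing:

```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.2</version>
</dependency>
```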

When a request reaches the API, we need to add the HTTP request's info to the MDC with a filter. Don't worry, we don't need to write any code for this: Logback provides a filter named MDCInsertingServletFilter.

The only thing we need to do is register this filter.
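One way to register it in Spring Boot, sketched as a configuration class (the class name is illustrative):

```java
import ch.qos.logback.classic.helpers.MDCInsertingServletFilter;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FilterConfig {

    // Registers Logback's MDCInsertingServletFilter so that request info
    // (remote host, request URI, user agent, ...) lands in the MDC.
    @Bean
    public FilterRegistrationBean<MDCInsertingServletFilter> mdcFilter() {
        return new FilterRegistrationBean<>(new MDCInsertingServletFilter());
    }
}
```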

We are going to use an interceptor to log HTTP requests. MDCInsertingServletFilter doesn't add request headers to the MDC, so in the interceptor we add the headers we want to log.

The custom interceptor must be registered.
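A sketch of the interceptor and its registration; the X-Correlation-Id header and the class names are illustrative, not taken from the original project:

```java
import org.slf4j.MDC;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HeaderMdcInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request,
                             HttpServletResponse response, Object handler) {
        // Copy the headers we want to log into the MDC (example header)
        MDC.put("correlationId", request.getHeader("X-Correlation-Id"));
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request,
                                HttpServletResponse response,
                                Object handler, Exception ex) {
        MDC.remove("correlationId"); // don't leak values across pooled threads
    }
}

@Configuration
class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new HeaderMdcInterceptor());
    }
}
```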

Now, let's add a config file named logback-spring.xml to the resources folder. Spring Boot will use this file for its log configuration.

We access values stored in the MDC by specifying their key with %X{…}.
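A sketch of such a logback-spring.xml, assuming logstash-logback-encoder; the correlationId key is just an example of an MDC value pulled in with %X:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
      <providers>
        <timestamp/>
        <logLevel/>
        <loggerName/>
        <message/>
        <!-- Pull MDC values into the JSON document with %X{key} -->
        <pattern>
          <pattern>
            { "requestUri": "%X{req.requestURI}", "correlationId": "%X{correlationId}" }
          </pattern>
        </pattern>
      </providers>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="console"/>
  </root>
</configuration>
```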

You can find the source code here.

2) Running Fluent Bit On Kubernetes and Forwarding Logs to Elasticsearch

Let's create a new namespace named logging on Kubernetes.
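Either kubectl create namespace logging or the equivalent manifest will do:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: logging
```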

By creating a ServiceAccount, we create an identity for the Fluent Bit pods that we are going to run.
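A minimal manifest for that (the fluent-bit name is a conventional choice, not mandated by anything):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
```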

We need to create a ClusterRole so that the Fluent Bit pods can read pod metadata and logs on the nodes.
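A sketch of such a ClusterRole, granting read access to pods and namespaces (the name is an example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
  - apiGroups: [""]
    resources: ["namespaces", "pods"]
    verbs: ["get", "list", "watch"]
```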

Let’s bind ClusterRole to the ServiceAccount.
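A sketch of the binding, assuming a ClusterRole named fluent-bit-read and a ServiceAccount named fluent-bit in the logging namespace (example names):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
  - kind: ServiceAccount
    name: fluent-bit
    namespace: logging
```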

We need to create a ConfigMap for Fluent Bit. Here are some configs that I want to mention. We specify the path of the log files with the Path property in the INPUT section.

Kubernetes maps container log files to /var/log/containers. While they are mapped there, the files are named with information such as the pod name, namespace, and container name.

The other important point here is setting the input Name to tail. The Tail plugin reads every log file that matches the wildcard pattern in the Path property.

The Name property of the output should be es to send logs to Elasticsearch.

Let's define an index prefix on Elasticsearch by using Logstash_Prefix.
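Putting those pieces together, a minimal ConfigMap could look like this; the kube-logs prefix and the environment variable name are illustrative, and note that Logstash_Prefix only takes effect when Logstash_Format is On:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [INPUT]
        Name              tail
        Path              /var/log/containers/*.log
        Parser            docker
        Tag               kube.*

    [OUTPUT]
        Name              es
        Match             *
        Host              ${FLUENT_ELASTICSEARCH_HOST}
        Port              9200
        Logstash_Format   On
        Logstash_Prefix   kube-logs
```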

Finally, Kubernetes should run Fluent Bit as a DaemonSet. With a DaemonSet, Kubernetes starts a Fluent Bit pod on every node. The pods whose logs we want to collect may be scheduled on any node; that is why Fluent Bit runs as a DaemonSet.

Besides this, we need to set the Elasticsearch host as an environment variable.
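A sketch of the DaemonSet, assuming a ServiceAccount named fluent-bit, a ConfigMap named fluent-bit-config, and an in-cluster Elasticsearch service; the image tag and host value are examples, not the original project's exact values:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      serviceAccountName: fluent-bit
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:1.3.2
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.logging.svc.cluster.local"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config
              mountPath: /fluent-bit/etc/fluent-bit.conf
              subPath: fluent-bit.conf
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: config
          configMap:
            name: fluent-bit-config
```

The /var/lib/docker/containers mount matters because, on Docker-based nodes, the files under /var/log/containers are symlinks that ultimately point there.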

3) Showing Logs with Kibana

First, we should create an index pattern in Kibana. To do this, click Management → Index Patterns → Create index pattern and define the index pattern.

After we choose the index pattern in the Discover section, we can see the logs.