How To Log NestJS Applications in a Distributed System with EFK Stack in Kubernetes — Part 3

Collecting Logs with fluentbit and showing them in Kibana

Itchimonji
CP Massive Programming
Jan 23, 2023


In one of my previous articles, I explained the importance of centralized logging. I demonstrated how to implement custom logging in a NestJS application using winston and described the central role of fluentbit.

In this article I want to show you how you can collect custom and stdout logs, push them into an Elasticsearch database, and visualize them in Kibana.

Creating a Kubernetes Cluster for Local Use

To gain experience with Kubernetes and improve our workflow, a local Kubernetes environment is key. For this, I use kind in most cases. Check out one of my articles to become familiar with kind.

We can use the following configuration file to set up a local Kubernetes cluster with one control-plane node and three worker nodes.

# kind.config.yaml

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: nestjs-logging
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

Now we can start the Kubernetes cluster through kind with the following command.

kind create cluster --config=kind.config.yaml

After the cluster is initialized, kind automatically merges the cluster's credentials into our kubeconfig (~/.kube/config by default), so we can run kubectl commands like kubectl get pods -A.
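For example, we can verify that the control plane is reachable and that all four nodes have joined (the context name kind-nestjs-logging is derived from the cluster name in kind.config.yaml):

kubectl cluster-info --context kind-nestjs-logging
kubectl get nodes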

Pods after creating a K8s cluster with kind

After we finish our work, we can delete the cluster with the following command.

kind delete cluster --name nestjs-logging

This local Kubernetes cluster is the basis for the rest of this article. We have to ensure that Docker and kind are installed, as well as the kubectl CLI and Helm.
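On macOS, for example, the CLI tools can be installed via Homebrew (assuming Homebrew is available; package names may differ on other platforms):

# Docker Desktop has to be installed separately
brew install kind kubectl helm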

Collecting stdout logs with EFK Stack in Kubernetes

Collecting stdout logs is very simple with EFK and Helm because the Helm charts are preconfigured: Elasticsearch, fluentbit, and Kibana are connected out of the box.

After we have created a local Kubernetes cluster with kind, we can deploy the EFK charts on it using the following commands.

helm repo add elastic https://helm.elastic.co
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

helm install elasticsearch elastic/elasticsearch --version 7.17.3
helm install kibana elastic/kibana --version 7.17.3
helm install fluent-bit fluent/fluent-bit --version 0.21.1

After a few seconds, we can see running pods in the default namespace with kubectl get pods.

EFK Pods

To access the Kibana UI, run the following command to forward Kibana’s default port 5601 to port 8080 for local use.

kubectl port-forward service/kibana-kibana 8080:5601

Now we can hit http://localhost:8080/ in our browser to access Kibana.

After opening the sidebar and navigating to Management > Stack Management > Kibana > Index Pattern > Create index pattern, we can create an Index Pattern to filter incoming logs.

Kibana Index Pattern Overview

We could create a wildcard with logstash-* to view all incoming logs.

Wildcard Index Pattern
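Alternatively, the index pattern can be created via Kibana's index patterns API (a sketch, assuming Kibana 7.17 and the port-forward from above):

curl -X POST "http://localhost:8080/api/index_patterns/index_pattern" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{ "index_pattern": { "title": "logstash-*", "timeFieldName": "@timestamp" } }'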

After this, we need to navigate to Analytics > Discover. There we can see all the logs of our local Kubernetes cluster.

Kibana Discover Page

To filter by certain criteria, we can either use the filter bar on the left or familiarize ourselves with KQL to use the search bar.

For example, to evaluate the logs of a particular container, we can use kubernetes.container_name : "fluent-bit" as KQL. This is especially useful when there are multiple pods of the same container, as in a DaemonSet.

Use KQL to filter logs
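A few more KQL examples, assuming the default field names produced by the fluentbit chart (the kubernetes.* metadata and the log field): filtering by namespace, combining a container filter with a wildcard text match, and matching pods by name prefix.

kubernetes.namespace_name : "default"
kubernetes.container_name : "fluent-bit" and log : *error*
kubernetes.pod_name : kibana*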

To delete all created dependencies, we can run the following commands. For the custom-log approach below, we will build a separate helm chart.

helm uninstall fluent-bit
helm uninstall kibana
helm uninstall elasticsearch

Collecting custom logs with EFK Stack in Kubernetes with a Sidecar

The other approach is to use the sidecar pattern and run a log-forwarding container next to the application container within the same pod. We need this pattern because winston writes its logs to the filesystem. A sidecar also extends the functionality of the main container without changing it. The application logs are shipped to the Elasticsearch database via a fluentbit sidecar container.

Source: https://kubernetes.io/docs/concepts/cluster-administration/logging/
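To recap what the main container does, here is a minimal sketch of such a file-based winston logger, assuming winston's File transport and the /usr/app/logs path used below (the actual services from the previous article may differ in detail).

// logger.ts - minimal sketch of a file-based winston logger (assumed setup)
import { createLogger, format, transports } from 'winston';

export const logger = createLogger({
  level: 'info',
  // JSON lines, so the fluentbit "docker" parser can decode each entry
  format: format.combine(format.timestamp(), format.json()),
  transports: [
    // written to the shared emptyDir volume that the sidecar tails
    new transports.File({ filename: '/usr/app/logs/app.log' }),
    // keep stdout as well, so kubectl logs still works
    new transports.Console(),
  ],
});

logger.info('Application started');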

The logs of the main container are shared with the sidecar container via an emptyDir Volume.

apiVersion: apps/v1
kind: Deployment
# ...
spec:
  template:
    spec:
      containers:
        - name: main-container
          # ...
          volumeMounts:
            - name: log-volume
              mountPath: /usr/app/logs
        - name: sidecar-container
          # ...
          volumeMounts:
            - name: log-volume
              mountPath: /usr/app/logs
      # ...
      volumes:
        - name: log-volume
          emptyDir: {}

Deploy the EFK Stack and two NestJS microservices

After we have created a local Kubernetes cluster with kind (see above), we can use a custom helm chart to deploy the EFK Stack and two NestJS microservices that generate some custom logs. The deployment can be found here.

To install this chart we need to run the following command.

helm upgrade --install efk efk

Now several different pods are spawned.

Pod overview with K9s

The connections between these microservices are shown in this architecture overview.

System Architecture

So, our frontend and backend services write their logs to /usr/app/logs in the container filesystem. The task of our sidecar is to pick up these logs and forward them. For this we use a simple fluentbit container.

apiVersion: apps/v1
kind: Deployment
# ...
spec:
  template:
    spec:
      containers:
        - name: main-container-with-winston
          # ...
          volumeMounts:
            - name: log-volume
              mountPath: /usr/app/logs
        # ...
        - name: fluentbit
          image: "fluent/fluent-bit:2.0.8-debug"
          ports:
            - name: metrics
              containerPort: 2020
              protocol: TCP
          env:
            - name: FLUENT_UID
              value: "0"
          volumeMounts:
            - name: config-volume
              mountPath: /fluent-bit/etc/
            - name: log-volume
              mountPath: /usr/app/logs
      volumes:
        - name: log-volume
          emptyDir: {}
        - name: config-volume
          configMap:
            name: fluentbit-sidecar

Like a fluentbit DaemonSet, the sidecar container needs a configuration mounted via a ConfigMap.

# sidecar.configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentbit-sidecar
data:
  fluent-bit.conf: |
    [SERVICE]
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_PORT     2020
        Flush         1
        Daemon        Off
        Log_Level     warn
        Parsers_File  parsers.conf

    [INPUT]
        Name              tail
        Path              /usr/app/logs/*.log
        multiline.parser  docker, cri
        Tag               custom.*
        Mem_Buf_Limit     300MB
        Skip_Long_Lines   On

    [FILTER]
        Name          parser
        Parser        docker
        Match         custom.*
        Key_Name      log
        Reserve_Data  On
        Preserve_Key  On

    [FILTER]
        Name   modify
        Match  *

    [OUTPUT]
        Name             es
        Match            *
        Host             elasticsearch-master
        Logstash_Format  On
        Logstash_Prefix  fluent_bit
        Retry_Limit      False

  parsers.conf: |
    [PARSER]
        Name             docker
        Format           json
        Time_Key         time
        Time_Format      %d/%b/%Y:%H:%M:%S %z
        Decode_Field_As  escaped_utf8  log  do_next
        Decode_Field_As  json          log

As we can see, the fluentbit container tails the path /usr/app/logs/*.log.

Very important is changing the Host in the [OUTPUT] section to the host we need. In this case, the host is the Kubernetes Service of Elasticsearch, which listens on port 9200.
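If Elasticsearch is reachable under a different Service name or port, Host and Port can be set explicitly in the [OUTPUT] section (Port defaults to 9200 in the es output plugin):

[OUTPUT]
    Name   es
    Match  *
    Host   elasticsearch-master
    Port   9200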

We can further customize the output plugin by following the official documentation.

We could add more labels or label_keys, or use a modify filter to attach custom labels or service names, as in the following example.

[FILTER]
    Name   modify
    Match  *
    Add    service_name database-service

After all pods are initialized, we can port-forward the Kibana service from its default port 5601 to port 8080 for local use.

# Portforward
kubectl port-forward service/efk-kibana 8080:5601

After opening the sidebar and navigating to Management > Stack Management > Kibana > Index Pattern > Create index pattern, we can create an Index Pattern to filter incoming logs. We could create a wildcard with fluent_bit-* to view all incoming logs.

Create an Index Pattern for the Sidecars

After this, we need to navigate to Analytics > Discover. There we can see the custom logs of our NestJS microservices.

Kibana Overview of Custom Logs

We can also generate some error logs with the frontend and backend apps. For this, we need to port-forward the frontend app to get access via localhost.

# portforward
kubectl port-forward service/efk-frontend-service 8081:80
# Open UI
open http://localhost:8081

This application gets some information about Star Wars from the backend app. To cause some errors, we need to hit the "Cause an error" button.

After refreshing the query in Kibana, we can see the error logs.

Error logs

Note that you can create custom dashboards in Analytics > Dashboard to show only the data fields with the necessary information.

Kibana Custom Dashboard

Kibana offers many more features that make centralized logging far easier. All of them are covered in the official documentation.

Conclusion

Logging plays a central role in distributed systems: in case of system failures, we want an overview of which applications generate which messages.

Fluentbit, Elasticsearch, and Kibana help us implement this approach. With fluentbit we can customize our logs via the output plugin and add additional labels and tags.

But keep in mind that audit logs can be very noisy, and logging every action can become expensive. Collecting custom logs via a sidecar lets us fine-tune this approach for our environment.

Thanks for reading! Follow me on Medium, Twitter, or Instagram, or subscribe here on Medium to read more about DevOps, Agile & Development Principles, Angular, and other useful stuff. Happy Coding! :)
