Loki Setup for Kubernetes cluster logging

Prakash Singh
5 min read · May 20, 2020


(Just a basic POC)

Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system.

This article gives a brief overview of how to set up Loki-based log monitoring, just as a POC. Before running this in production, you should work through all its pros, cons, and the official guidelines.

Overall Architecture

Here is an overall idea of what we are going to set up:

  • Grafana server (outside k8s)
  • Loki server (outside k8s)
  • Promtail (inside k8s as a DaemonSet)

Installing Grafana

Grafana can be installed in two ways. You can follow the traditional package-based route:

  • RPM-based, on CentOS and RedHat
  • DEB-based, on Ubuntu and Debian

or install it using Docker (our focus will be on this).

For a POC, the most convenient installation method is Docker, which is what I will use here.
You can follow this document: https://grafana.com/docs/grafana/latest/installation/docker/

I am assuming we are running this on server01.

docker run -d -p 3000:3000 --name grafana grafana/grafana:master

Installing Loki Server

If it's just a POC, you can run the Loki container on the same machine as Grafana or on a different one; it is up to you.
Loki can also be installed as a service, but since we are focusing on a POC and on understanding the basic flow, we will run Loki as a container only.

I am assuming we are running this on server02.

docker run -d -p 3100:3100 --name loki grafana/loki:latest

Note:

  • After running each container, check whether you can actually reach the respective service
  • Open the firewall for whichever port you expose each service on
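As a quick sanity check, assuming the hostnames from above (server01 for Grafana, server02 for Loki) and the default ports, you can curl each service's health endpoint:

```shell
# Grafana health endpoint; returns JSON including "database": "ok"
curl -s http://server01:3000/api/health

# Loki readiness endpoint; returns "ready" once the server is up
curl -s http://server02:3100/ready
```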

Tip:

  • You can also install the above services using the docker-compose file below; just remove the Promtail component from it, as we will run Promtail in a different way (if you have already followed the steps above, just ignore this)
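The compose file embedded in the original post is not reproduced here, but a minimal sketch along those lines might look like the following. It covers only Grafana and Loki (Promtail runs inside the cluster instead); the image tags and ports simply mirror the docker run commands above:

```yaml
version: "3"
services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
  grafana:
    image: grafana/grafana:master
    ports:
      - "3000:3000"
```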

So now we have two Endpoints:

DNS for Loki

I will map http://server02:3100/ to the DNS name loki, so we can access the Loki server as http://loki:3100/.
(I will explain an alternative later, in case creating a DNS entry is not possible.)
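If you control the machines, the simplest way to get such a name without real DNS is an /etc/hosts entry on every host that needs to reach Loki (the IP below is a placeholder):

```
# /etc/hosts on each client machine / k8s node
10.0.0.2   loki    # replace with server02's actual IP
```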

Now it's time to run the Promtail agent. Promtail is one of the agents that can push logs to Loki. There can be multiple scenarios here; maybe you want to push only Kubernetes logs. It all depends on your use case.

Promtail Installation

So here we have to install the Promtail agent on each k8s node so that it can push logs from every node to Loki. We will run the Promtail agent as a DaemonSet in the k8s cluster.
Although we could manually create a ConfigMap, a ServiceAccount, a ClusterRole and ClusterRoleBinding (to give read access to k8s objects), the DaemonSet YAML, and so on, it is better to use the Helm chart already provided and save ourselves from creating and installing each file by hand.

Step 1: Make sure Helm is installed on your local machine.
Follow this document if you need to install the Helm client: https://helm.sh/docs/intro/install/

Step 2: Make sure you are connected to a Kubernetes cluster; it can be GKE, AKS, etc.

Step 3: Install Tiller so that you can deploy Helm charts in the k8s cluster (this setup uses Helm 2):
helm init --upgrade

Step 4: Add the Helm repo for Loki:
helm repo add loki https://grafana.github.io/loki/charts

Step 5: Now run the command below to install Promtail in the k8s cluster as a DaemonSet:
helm upgrade --install promtail loki/promtail --set "loki.serviceName=loki"

While running the above, you may get an RBAC-related error like the one below:

To mitigate the above error, follow these quick steps.

  • Check whether you have a dedicated service account for Tiller; if not, create one:
    kubectl create serviceaccount tiller --namespace kube-system
  • Bind the tiller service account to the cluster-admin cluster role (for a production setup you should create a new cluster role with only the specific permissions needed and bind that to the tiller service account instead):
    kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
  • Now we have to add this service account to the Tiller pod:
    kubectl patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' --namespace kube-system

Now try the command below again to install Promtail as a DaemonSet:

helm upgrade --install promtail loki/promtail --set "loki.serviceName=loki"

While running the above command, you may get an error like “promtail has no deployed release”.

To mitigate this issue, use the “--force” flag like below:

helm upgrade --install promtail loki/promtail --set "loki.serviceName=loki" --force

Verify

We have now run all the basic steps required to push logs to the Loki server. The helm install command prints output like the one below, where we can check which components were created as part of the chart installation.

After some time, all our DaemonSet pods will be in the Running state.
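You can watch the rollout with standard kubectl commands. The release name promtail matches the helm install above; the label selector is an assumption and may differ between chart versions:

```shell
# DaemonSet status: DESIRED/READY should match your node count
kubectl get daemonset promtail

# One Promtail pod per node
kubectl get pods -l app=promtail -o wide
```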

Now let's see where the Loki server endpoint is specified, i.e. where we tell Promtail which Loki server it has to send logs to.

Just describe your promtail DaemonSet (or get it as YAML) and look at the configuration below, which I have blurred.

In my case I have provided:
client.url=http://loki:3100/loki/api/v1/push

As I mentioned earlier, if you do not have a DNS entry you can put your IP and port directly into the DaemonSet instead.

Another way to do it is by updating the Helm chart's values.yaml file:
https://github.com/grafana/loki/blob/master/production/helm/promtail/values.yaml
Update extraCommandlineArgs in the above-mentioned file.
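For example, the client URL can be passed as an extra command-line argument in that values.yaml. This is a sketch; the exact key names may vary between chart versions, and the IP is a placeholder:

```yaml
# values.yaml for the promtail chart
loki:
  serviceName: loki
  servicePort: 3100

extraCommandlineArgs:
  - "-client.url=http://10.0.0.2:3100/loki/api/v1/push"   # replace with your Loki IP or DNS name
```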

So what is pending now? We need to check whether our Promtail pods are able to send logs to Loki. How do we verify that? You can watch a Promtail pod's logs with:
kubectl logs -f <pod_name> -n <namespace>

You should not be seeing any 4xx or 5xx responses, or connection timeouts, for the Loki endpoint.
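You can also verify from the Loki side that data is arriving, by querying its HTTP API directly:

```shell
# List the label names Loki has indexed; a non-empty list means logs have arrived
curl -s http://loki:3100/loki/api/v1/labels

# Instant query counting log lines from any job over the last 5 minutes
curl -s -G http://loki:3100/loki/api/v1/query \
  --data-urlencode 'query=count_over_time({job=~".+"}[5m])'
```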

Now it's time to check whether we can see the logs in Grafana.

Log in to the Grafana portal at http://server01:3000/

Add a Loki data source, then Save & Test to check connectivity:
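If you prefer not to click through the UI, Grafana can also pick the data source up from a provisioning file mounted into the container. A minimal sketch:

```yaml
# /etc/grafana/provisioning/datasources/loki.yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
```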

Now click the Explore button in Grafana's sidebar to explore logs; you may need to understand LogQL, Loki's query language.

You can type your queries here and explore the logs:
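A couple of basic queries to start with. The label names depend on your Promtail scrape config; these assume the default Kubernetes labels attached by the chart:

```
{namespace="kube-system"}          # all logs from the kube-system namespace
{app="promtail"} |= "error"        # Promtail's own logs, filtered to lines containing "error"
```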

I hope this document is helpful for you.
More details on customising the configuration may be shared in another article.

If you have doubts feel free to comment.
