How to get started with logging in Kubernetes so you can get some Holiday rest

LogDNA · Published in manifoldco · Dec 6, 2018

Kubernetes makes it easy to manage containers at scale. It also introduces complexity through its sheer number of moving parts: containers, backend services, and even nodes change constantly.

When a Pod is evicted, crashes, is deleted, or is rescheduled onto a different node, the logs from its containers are gone. This is different from logging on traditional servers or virtual machines: when an app dies on a virtual machine, you don't lose its logs until you delete the machine. Kubernetes, by contrast, cleans up after itself, and the logs do not persist. Understanding the ephemeral nature of default logging in Kubernetes is important because it underlines the need for an actual log management solution, not just SSHing into a machine to look at log files. Don't worry; with LogDNA, that means typing just two kubectl commands.

Yes. That’s all folks. You can start your holidays now. Or read on if you’d like to look under the hood and understand how logging works in Kubernetes.

Types of logs in Kubernetes:

  1. Node logs, which are generated by nodes and services running on nodes (e.g. the kubelet agent, kube-proxy, and other services)
  2. Component logs, which are generated by containers, Pods, Services, DaemonSets, and other Kubernetes components

1. Node Logs

Each node in a Kubernetes cluster runs services that allow it to host Pods, receive commands, and network with other nodes. The format and location of these logs depend on the host operating system. For example, you can get logs for the kubelet service on a typical Linux server by running journalctl -u kubelet. On systems without systemd, these services write their logs to files in the /var/log directory.
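For instance, on a systemd-based node you can inspect the node-level services directly. A quick sketch; the hostname is made up, and the exact unit names depend on your distribution and container runtime:

user@kubernetes-node:~$ journalctl -u kubelet --since "1 hour ago"
user@kubernetes-node:~$ journalctl -u docker --since "1 hour ago"
user@kubernetes-node:~$ ls /var/log/

The last command is the non-systemd fallback: look there for files such as kubelet.log or kube-proxy.log.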

2. Component Logs

Component logs are captured by Kubernetes itself and can be accessed using the Kubernetes API. This is most commonly used with Pods. At the Pod level, each message that an application writes to STDOUT or STDERR is automatically collected by the container runtime and handled by the runtime’s logging driver. Kubernetes reads these logs and appends information such as the Pod name, hostname, and namespace.

For example, here is how an access-log event from a standalone Nginx container running in a Pod looks once the container runtime has captured it:
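On the node, Docker's default json-file logging driver wraps each line the container writes to STDOUT in a small JSON envelope recording the stream and timestamp. The entry below is illustrative rather than captured from a real cluster:

{"log":"10.4.0.1 - - [07/Nov/2018:18:36:22 +0000] \"GET / HTTP/1.1\" 200 168 \"-\" \"Mozilla/5.0\" \"-\"\n","stream":"stdout","time":"2018-11-07T18:36:22.000000000Z"}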

Collecting and Analyzing Kubernetes Logs

You can access Kubernetes logs using multiple methods: using the Kubernetes command line interface (CLI), building your own logging pipeline, or forwarding your logs to a logging service like LogDNA.

Using the Kubernetes CLI

The Kubernetes CLI lets you view logs from any container, Pod, or deployment. For example, let’s view the logs created by the Nginx container we previously deployed. We deployed it as part of a Kubernetes Deployment called nginx-deployment, so we can view logs using the following command:

user@kubernetes-test:~$ kubectl logs deployment/nginx-deployment

10.4.0.1 - - [07/Nov/2018:18:36:22 +0000] "GET / HTTP/1.1" 200 168 "-" "Mozilla/5.0" "-"
10.4.0.1 - - [07/Nov/2018:18:37:04 +0000] "POST /index.php HTTP/1.1" 404 570 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0)" "-"

The kubectl logs deployment/<name> command shows log output from a Pod in the deployment. If a container in the Pod has crashed and restarted, you can view the previous container's logs by appending --previous. Logs are handled by the container runtime, which in this case is Docker. Since Docker stores its logs on the node by default, the Pod's logs are only available as long as the node is available and hasn't overwritten or deleted the relevant files.
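A few other kubectl logs options are worth knowing. A quick sketch, assuming the deployment's Pods carry an app=nginx label (adjust the selector to whatever your manifests actually use):

user@kubernetes-test:~$ kubectl logs deployment/nginx-deployment -f
user@kubernetes-test:~$ kubectl logs deployment/nginx-deployment --tail=100 --since=1h
user@kubernetes-test:~$ kubectl logs -l app=nginx --all-containers=true

The first command streams logs live, the second limits output to the last 100 lines from the past hour, and the third pulls logs from every container in every Pod matching the label.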

The Hard Way: Fluentd and ElasticSearch

There are a number of different ways to log at the Pod or node level, such as configuring Docker's logging driver or adding a sidecar container to capture logs. You can find many guides to setting up Fluentd, some of which run to 30+ steps. Then scaling ElasticSearch becomes its own challenge: you have to learn how to properly architect shards and indices and become an expert at ElasticSearch operations.
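Even the simplest piece of that setup, Docker's logging driver, is node-level configuration you maintain yourself. A minimal sketch of /etc/docker/daemon.json that keeps the default json-file driver but rotates logs so they don't fill the disk (the size limits here are illustrative):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}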

But the holidays are near and you don’t need to add to your list of work.

The Easy Way: Forward your logs to LogDNA

Here are the two commands you run after you’ve signed up.

# kubectl create secret generic logdna-agent-key --from-literal=logdna-agent-key=YOUR-INGESTION-KEY-HERE

# kubectl create -f https://raw.githubusercontent.com/logdna/logdna-agent/master/logdna-agent-ds.yaml

(Replace YOUR-INGESTION-KEY-HERE with your actual LogDNA Ingestion Key)

When you install LogDNA's collector agent, it is deployed as a Docker container that automatically collects logs from other containers running on the same node, as well as from the node itself. Since it runs as a regular Docker container, Kubernetes can deploy it across multiple nodes. The container is deployed as a DaemonSet, which ensures that every node runs exactly one copy of it. And since the agent streams logs to LogDNA rather than storing them locally, it doesn't take additional resources away from your Pods and nodes.
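Once the agent is running, you can confirm that the DaemonSet has placed one agent Pod on each node. The resource and label names below assume the defaults from logdna-agent-ds.yaml:

user@kubernetes-test:~$ kubectl get daemonset logdna-agent
user@kubernetes-test:~$ kubectl get pods -l app=logdna-agent -o wide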

You can learn more about deploying the LogDNA agent on Kubernetes by reading our Kubernetes documentation.

Instead of spending your time scaling logging infrastructure, you can focus on scaling your product… or get that much-needed rest in time for the holidays.

To learn more about LogDNA, contact us, visit our website, or sign up for an account through Manifold!

Special thanks to Thu Nguyen and the rest of the LogDNA team for submitting this post for us to publish on their behalf as part of Manifold’s 12 Days of Services.
