Introducing Kubectl Flame: Effortless Profiling on Kubernetes

Eden Federman · Published in The Startup · Aug 17, 2020

What is Profiling?

Profiling is the act of analyzing the performance of applications in order to improve poorly performing sections of code.
One of the most popular ways to visualize a profile and quickly identify performance issues is to generate a Flame Graph.

Flame graph of a Java API application based on the Spring framework

The y-axis is stack depth, and the x-axis spans the sample population. Each rectangle is a function, where the width shows how often it was present in the profile. The ordering from left to right is unimportant (the stacks are sorted alphabetically).
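As a toy illustration, a flame graph is typically built from collapsed stack samples like the following (the function names here are invented, not taken from the graph above). Each line is a semicolon-separated call stack followed by a sample count, and a function's rectangle width is proportional to the total number of samples it appears in:

```
main;handleRequest;parseJson 25
main;handleRequest;queryDb   60
main;idle                    15
```

In the resulting graph, main spans the full width (it appears in all 100 samples), while the queryDb rectangle covers 60% of it, immediately pointing at the database query as the hot path.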

The Problem: Profiling on Kubernetes

Profiling is a non-trivial task. Most profilers have two main problems:

  • They require application modifications, usually either by adding flags to the execution command or by importing a profiling library into your code.
  • They impose a significant performance hit while profiling, which is why profiling in production is typically avoided.

Choosing the right profiler may solve those problems, but it requires research and usually depends on the programming language and the operating system.

Profiling is even harder when performed on applications running inside a Kubernetes cluster. A new container image, which includes the profiling modifications, needs to be deployed instead of the currently running container. In addition, some performance issues may disappear when an application restarts, which makes debugging difficult.

The Solution: kubectl flame

Kubectl flame is a kubectl plugin that makes profiling applications running in Kubernetes a smooth experience, without requiring any application modifications or downtime. It also aims to be production-friendly by minimizing the performance hit.

The plugin currently supports JVM-based languages (support for additional languages is coming soon!).

Usage

kubectl flame in action

Profiling a Kubernetes Pod

To profile the pod mypod for one minute and save the flame graph as /tmp/flamegraph.svg, run:

kubectl flame mypod -t 1m -f /tmp/flamegraph.svg

Profiling an Alpine-based container

Profiling Alpine-based containers requires the --alpine flag:

kubectl flame mypod -t 1m -f /tmp/flamegraph.svg --alpine

Profiling a sidecar container

Pods that contain more than one container require specifying the target container as an argument:

kubectl flame mypod -t 1m -f /tmp/flamegraph.svg mycontainer

How it works

Kubectl flame uses the most performant profiler available for each supported language. A vital requirement for a supported profiler is the ability to profile without making any changes to the target process. The profiling flow starts by launching a profiler container on the same node as the target container. Most profilers need to share some resources with the target container:

  • PID namespace sharing is enabled by setting hostPID to true.
  • Filesystem sharing is enabled by mounting /var/lib/docker and querying overlayFS.

The current JVM support is based on async-profiler.
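As a rough sketch (not the plugin's actual generated manifest, and with illustrative names), the profiler pod launched by kubectl flame might look something like this, with the two sharing mechanisms above visible in the spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-flame-profiler      # illustrative name
spec:
  hostPID: true                     # share the node's PID namespace with the target process
  nodeName: target-node             # schedule on the same node as the target container
  containers:
    - name: profiler
      image: profiler-image         # hypothetical profiler image
      volumeMounts:
        - name: docker-fs
          mountPath: /var/lib/docker   # reach the target's filesystem via overlayFS
  volumes:
    - name: docker-fs
      hostPath:
        path: /var/lib/docker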

kubectl flame overview

Future Development

The following features are already in progress and should land in upcoming versions of kubectl flame:

  • Golang support via eBPF profiling
  • Auto-detection of target container programming language
  • Python support
  • Support for Kubernetes clusters that do not use Docker as the container runtime (e.g., Kind)

Summary

Performance-related issues are among the hardest bugs to solve. Next time you face performance problems in your production system, consider using kubectl flame to identify the root cause quickly.

The kubectl flame source code is available on GitHub.
Feel free to reach out or submit pull requests if you have any suggestions.
