Simple OpenTelemetry setup in a Kubernetes environment

A getting-started guide for running OpenTelemetry in a Kubernetes cluster.

Ruturaj Shitole
Incerto Technologies
5 min read · Dec 9, 2023


fig-1: A high-level architecture of the system

In this article, I’ll take you through a simple setup for getting started with OpenTelemetry in a Kubernetes environment. It is intended as a starting point for those who want to collect telemetry data in a Kubernetes cluster using OpenTelemetry. As shown in fig-1, we will set up two components in the cluster: the application and the OpenTelemetry collector. The setup works as follows:

  1. The user interacts with the Application via exposed endpoints
  2. The application generates telemetry data and exports it to the OpenTelemetry collector
  3. The OpenTelemetry collector then processes the telemetry data and displays it in the console as logs

The source code can be found in my GitHub repository: rutu-sh/otel-k8s-experiments.

Prerequisites

You’ll need to have the following installed and configured in your system:

  1. Python
  2. Docker or Podman
  3. Minikube

1. Auto-Instrumenting a Python Application

In order to make a system observable, it must be instrumented: that is, code from the system’s components must emit traces, metrics, and logs. (OpenTelemetry docs)

A system’s instrumentation can be done either manually or automatically. Manual instrumentation requires code modifications while auto-instrumentation uses OpenTelemetry tools to generate telemetry data while running the application. In this article, we’ll use auto-instrumentation for simplicity.

A simple FastAPI Python Application can be written as follows:

code-1 (main.py): A simple FastAPI Application

To run this application, we can do:

python3 main.py

To auto-instrument this application, we’ll need to install the OpenTelemetry dependencies for Python (the opentelemetry-distro package, followed by opentelemetry-bootstrap -a install to pick up instrumentation for the installed libraries) and run:

OTEL_SERVICE_NAME=demo-application \
OTEL_TRACES_EXPORTER=console \
OTEL_METRICS_EXPORTER=console \
opentelemetry-instrument \
python3 main.py

To use this application in a Kubernetes setup, it must be Dockerized first. I have created a similar FastAPI application in the GitHub repository. The application is Dockerized and pushed to DockerHub with the image name rutush10/otel-autoinstrumentation-fastapi-simple-app:0.0.4. The same image will be used throughout this demo.
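The Dockerfile itself isn’t reproduced in the article; the following is a sketch of how such an image could be built, assuming a requirements.txt that lists fastapi, uvicorn, and opentelemetry-distro.

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install app dependencies plus the OpenTelemetry tooling, then pull in
# instrumentation packages for the libraries that are installed
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt && \
    opentelemetry-bootstrap -a install

COPY main.py .

# Run the app under the auto-instrumentation wrapper
CMD ["opentelemetry-instrument", "python3", "main.py"]
```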

2. Creating an OpenTelemetry Collector Config

The collector config defines how the telemetry data (traces, metrics, logs) is processed within the OpenTelemetry collector. It has four main components:

  1. Receivers: Define the source of telemetry data
  2. Processors: Define the processing of the telemetry data
  3. Exporters: Define the destination for the telemetry data
  4. Connectors: Connect multiple processing pipelines

Each telemetry signal (traces, metrics, logs) will have a pipeline defined within the OpenTelemetry collector using the following components:

  • Receivers:
    otlp: Configured to receive telemetry data over gRPC.
  • Processors:
    batch: Processes data in batches.
  • Exporters:
    logging: Logs the telemetry data to the console.

The flow of the data will happen in the following order:

  1. Telemetry data is received by the otlp receiver on port 4317 and forwarded to the batch processor
  2. The batch processor forwards the data in chunks to the logging exporter
  3. The logging exporter writes the data to the console in a human-readable form

fig-2: Flow of the telemetry data in the collector

The pipelines for telemetry data are defined within the collector-config YAML as shown below:

code-2 (data/otel-collector-config.yaml): OpenTelemetry Collector Config
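The full config from the repository isn’t inlined above; a sketch matching the pipelines just described (an OTLP receiver on gRPC, a batch processor, and a logging exporter for all three signals) would look like:

```yaml
# otel-collector-config.yaml -- a sketch of the pipelines described above
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  logging:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```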

3. Creating Kubernetes manifests

The system’s architecture will look as shown below. Here I will provide a short description of the manifests; a detailed explanation of the architecture and the manifests is given here, and I would suggest going through it if you have difficulty understanding any of the configurations.

fig-3: Architecture of the System

Following is the structure of the k8s directory containing the Kubernetes manifests:

k8s
├── configmap.yaml
├── data
│   └── otel-collector-config.yaml
├── deployment.yaml
├── kustomization.yaml
├── namespace.yaml
└── service.yaml

Components

Namespaces:

  1. opentelemetry-demo: Defines the namespace containing the k8s resources.

code-3 (k8s/namespace.yaml): Namespace specification
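The namespace manifest is short; a sketch of what it could contain:

```yaml
# namespace.yaml -- sketch
apiVersion: v1
kind: Namespace
metadata:
  name: opentelemetry-demo
```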

ConfigMaps:

  1. single-app-single-collector: Defines the environment variables required for the application.

code-4 (k8s/configmap.yaml): ConfigMap for the Application
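The exact variables live in the repository; a sketch of such a ConfigMap, assuming the app reads the standard OTEL_* environment variables and exports over gRPC to the otel-collector service, could be:

```yaml
# configmap.yaml -- sketch; variable values are illustrative
apiVersion: v1
kind: ConfigMap
metadata:
  name: single-app-single-collector
  namespace: opentelemetry-demo
data:
  OTEL_SERVICE_NAME: demo-application
  # Point the auto-instrumented app at the collector's ClusterIP service
  OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
  OTEL_TRACES_EXPORTER: otlp
  OTEL_METRICS_EXPORTER: otlp
```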

2. otel-collector-config: This ConfigMap is created using Kustomization functionalities, and is mounted to the opentelemetry-collector container. Using Kustomization allows separating the otel-collector-config.yaml (defined in code-2) from the Kubernetes manifests and provides more readability.

code-5 (k8s/kustomization.yaml): Kustomization YAML
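A sketch of such a Kustomization, using configMapGenerator to turn the standalone collector config file into a ConfigMap:

```yaml
# kustomization.yaml -- sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: opentelemetry-demo
resources:
  - namespace.yaml
  - configmap.yaml
  - deployment.yaml
  - service.yaml
configMapGenerator:
  - name: otel-collector-config
    files:
      - data/otel-collector-config.yaml
```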

Deployments:

  1. single-app-single-collector: To manage the deployment of the FastAPI application. The application container is exposed on port 8000.
  2. opentelemetry-collector: To manage the deployment of the OpenTelemetry collector. The container is exposed on port 4317 (for gRPC).

Don’t forget to edit the resource constraints according to your system specifications.

code-7 (k8s/deployment.yaml): Deployment specifications
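The deployment manifest isn’t inlined in the article; a sketch of the two deployments described above (labels, collector image, and resource values are illustrative) could look like:

```yaml
# deployment.yaml -- sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: single-app-single-collector
  namespace: opentelemetry-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: single-app-single-collector
  template:
    metadata:
      labels:
        app: single-app-single-collector
    spec:
      containers:
        - name: app
          image: rutush10/otel-autoinstrumentation-fastapi-simple-app:0.0.4
          ports:
            - containerPort: 8000
          # Inject the OTEL_* variables from the ConfigMap
          envFrom:
            - configMapRef:
                name: single-app-single-collector
          resources:
            limits:
              cpu: "250m"
              memory: "256Mi"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetry-collector
  namespace: opentelemetry-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opentelemetry-collector
  template:
    metadata:
      labels:
        app: opentelemetry-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector:0.91.0
          args: ["--config=/etc/otel/otel-collector-config.yaml"]
          ports:
            - containerPort: 4317
          # Mount the generated collector config into the container
          volumeMounts:
            - name: config
              mountPath: /etc/otel
      volumes:
        - name: config
          configMap:
            name: otel-collector-config
```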

Services:

  1. single-app-single-collector: This is a NodePort service that forwards requests on the node’s port to port 8000 on the pod.
  2. otel-collector: This is a ClusterIP service that exposes the collector within the cluster. This service maps the requests to port 4317 on the pod.

code-8 (k8s/service.yaml): Service specifications
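A sketch of the two services described above (the selector labels are illustrative; the node port matches the 30000 used later in this article):

```yaml
# service.yaml -- sketch
apiVersion: v1
kind: Service
metadata:
  name: single-app-single-collector
  namespace: opentelemetry-demo
spec:
  type: NodePort
  selector:
    app: single-app-single-collector
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30000
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: opentelemetry-demo
spec:
  type: ClusterIP
  selector:
    app: opentelemetry-collector
  ports:
    - port: 4317
      targetPort: 4317
```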

Once you’re done configuring all the manifests, run the following command from the k8s directory:

kubectl apply -k .

To get the application URL run the following:

minikube service --namespace opentelemetry-demo single-app-single-collector --url

Make requests to port 30000 on this URL to generate application telemetry. You can load the Swagger docs at <application-url>:30000/docs and make requests from there.

4. Visualizing

Launch the Kubernetes dashboard (for Minikube, run minikube dashboard), and you should see similar results (select the opentelemetry-demo namespace from the dashboard dropdown).

The Dashboard should look like this:

fig-4: Kubernetes Dashboard

You can check the application logs by navigating to the application pod. The logs should show the auto-instrumented telemetry data:

fig-5: Application Pod Logs

The telemetry data is also exported to the collector via the otel-collector service, which then displays it as shown below:

fig-6: Collector Pod Logs

Conclusion

I hope you were able to follow along with the article and configure your own Kubernetes cluster with OpenTelemetry. For more details, visit the GitHub repository. Additionally, you can check out the docs here to write your own Python code and have it ready to be used within a Kubernetes deployment.
