Simple OpenTelemetry setup in a Kubernetes environment
A getting-started guide for running OpenTelemetry in a Kubernetes cluster.
In this article, I’ll take you through a simple setup for getting started with OpenTelemetry in a Kubernetes environment. It is intended as a starting point for anyone who wants to begin collecting telemetry data in a Kubernetes cluster using OpenTelemetry. As shown in Fig. 1, we will set up two components in the Kubernetes cluster: the application and the OpenTelemetry collector. The system works as follows:
- The user interacts with the Application via exposed endpoints
- The application generates telemetry data and exports it to the OpenTelemetry collector
- The OpenTelemetry collector then processes the telemetry data and displays it in the console as logs
The source code can be found in my GitHub repository: rutu-sh/otel-k8s-experiments.
Prerequisites
You’ll need to have the following installed and configured on your system:

- Docker
- kubectl
- minikube
- Python 3
1. Auto-Instrumenting a Python Application
> In order to make a system observable, it must be instrumented: That is, code from the system’s components must emit traces, metrics, and logs. (OpenTelemetry docs)
A system’s instrumentation can be done either manually or automatically. Manual instrumentation requires code modifications while auto-instrumentation uses OpenTelemetry tools to generate telemetry data while running the application. In this article, we’ll use auto-instrumentation for simplicity.
A simple FastAPI Python Application can be written as follows:
To run this application, we can do:
```shell
python3 main.py
```

To auto-instrument this application, we’ll need to install the OpenTelemetry dependencies for Python, as shown here, and run:
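The dependencies can be installed with pip; per the OpenTelemetry Python docs, something along these lines (package names current as of writing):

```shell
pip install opentelemetry-distro opentelemetry-exporter-otlp
# Detect installed libraries (e.g. FastAPI) and install matching instrumentation packages
opentelemetry-bootstrap -a install
```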
```shell
OTEL_SERVICE_NAME=demo-application \
OTEL_TRACES_EXPORTER=console \
OTEL_METRICS_EXPORTER=console \
opentelemetry-instrument \
python3 main.py
```

To use this application in a Kubernetes setup, it must be Dockerized first. I have created a similar FastAPI application in the GitHub repository. The application is Dockerized and pushed to DockerHub as rutush10/otel-autoinstrumentation-fastapi-simple-app:0.0.4. The same image will be used throughout this demo.
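For reference, a Dockerfile along these lines could produce such an image (a sketch; the published image may be built differently, and requirements.txt is a hypothetical file listing the application and OpenTelemetry dependencies):

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# requirements.txt is assumed to list fastapi, uvicorn, and the OpenTelemetry packages
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt && \
    opentelemetry-bootstrap -a install

COPY main.py .
EXPOSE 8000

# Run the app through the auto-instrumentation wrapper
CMD ["opentelemetry-instrument", "python3", "main.py"]
```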
2. Creating an OpenTelemetry Collector Config
The collector config defines how the telemetry data (traces, metrics, logs) is processed within the OpenTelemetry collector. It has four main components:
- Receivers: Define the source of telemetry data
- Processors: Define the processing of the telemetry data
- Exporters: Define the destination for the telemetry data
- Connectors: Connect multiple processing pipelines
Each telemetry signal (traces, metrics, logs) has a pipeline defined within the OpenTelemetry collector using the following components:

- Receivers:
  - `otlp`: Configured to receive telemetry data over gRPC
- Processors:
  - `batch`: Processes data in batches
- Exporters:
  - `logging`: Logs the telemetry data to the console
The data flows in the following order:

- Telemetry data is received by the `otlp` receiver on port 4317 and forwarded to the `batch` processor
- The `batch` processor forwards the data in chunks to the `logging` exporter
- The `logging` exporter logs the data to the console in the OTLP format
The pipelines for telemetry data are defined within the collector-config YAML as shown below:
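A sketch of such a config follows (the actual file lives at data/otel-collector-config.yaml in the repository and may differ in detail; note that the collector's built-in receiver type for OTLP data is `otlp`):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  logging:
    loglevel: debug

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```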
3. Creating Kubernetes manifests
The system’s architecture will look as shown below. Here I provide a short description of each manifest; a detailed explanation of the architecture and the manifests is given here, and I suggest going through it if you have difficulty understanding any of the configurations.
Following is the structure of the k8s directory containing the Kubernetes manifests:
```
└── k8s
    ├── configmap.yaml
    ├── data
    │   └── otel-collector-config.yaml
    ├── deployment.yaml
    ├── kustomization.yaml
    ├── namespace.yaml
    └── service.yaml
```

Components
Namespaces:

- opentelemetry-demo: Defines the namespace containing the k8s resources.

ConfigMaps:

- single-app-single-collector: Defines the environment variables required by the application.
- otel-collector-config: This ConfigMap is created with Kustomize's configMapGenerator and is mounted into the opentelemetry-collector container. Using Kustomize keeps otel-collector-config.yaml (defined in code-2) separate from the Kubernetes manifests and improves readability.
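A kustomization.yaml along these lines can generate that ConfigMap from the file under data/ (a sketch; the repository's actual file may differ):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: opentelemetry-demo

resources:
  - namespace.yaml
  - configmap.yaml
  - deployment.yaml
  - service.yaml

# Generate the collector ConfigMap from the standalone config file
configMapGenerator:
  - name: otel-collector-config
    files:
      - data/otel-collector-config.yaml

# Disable name-hash suffixes so the Deployment can reference the ConfigMap by a stable name
generatorOptions:
  disableNameSuffixHash: true
```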
Deployments:
- single-app-single-collector: To manage the deployment of the FastAPI application. The application container is exposed on port 8000.
- opentelemetry-collector: To manage the deployment of the OpenTelemetry collector. The container is exposed on port 4317 (for gRPC).
Don’t forget to edit the resource constraints according to your system specifications.
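The application also needs to know where to send its telemetry. As a sketch, the environment provided by the single-app-single-collector ConfigMap might look like this (the variable values are assumptions based on the service names in this article):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: single-app-single-collector
  namespace: opentelemetry-demo
data:
  OTEL_SERVICE_NAME: demo-application
  OTEL_TRACES_EXPORTER: otlp
  OTEL_METRICS_EXPORTER: otlp
  # The collector is reachable via the otel-collector ClusterIP service on port 4317
  OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4317
```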
Services:
- single-app-single-collector: This is a NodePort service that forwards requests on the node’s port to port 8000 on the pod.
- otel-collector: This is a ClusterIP service that exposes the collector within the cluster. This service maps the requests to port 4317 on the pod.
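As a sketch, the application's NodePort service might look like this (the nodePort value matches the port 30000 used below; the selector labels are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: single-app-single-collector
  namespace: opentelemetry-demo
spec:
  type: NodePort
  selector:
    app: single-app-single-collector
  ports:
    - port: 8000        # service port inside the cluster
      targetPort: 8000  # container port on the pod
      nodePort: 30000   # port exposed on the node
```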
4. Deploying

Once you’re done configuring all the manifests, run the following command from the k8s directory:
```shell
kubectl apply -k .
```

To get the application URL, run:
```shell
minikube service --namespace opentelemetry-demo single-app-single-collector --url
```

Make requests to port 30000 at the returned URL; this generates the application telemetry. You can load the Swagger docs at <application-url>:30000/docs and make requests from there.
5. Visualizing
Launch the Kubernetes dashboard by following this; you should see similar results (select the opentelemetry-demo namespace from the dashboard dropdown).
The Dashboard should look like this:
You can check the application logs by navigating to the application pod. The logs should show the auto-instrumented telemetry data:
The telemetry data is also exported to the collector via the otel-collector service, which then displays it as shown below:
Conclusion
I hope you were able to follow along and configure your own Kubernetes cluster with OpenTelemetry. For more details, visit the GitHub repository. Additionally, you can check out the docs here to write your own Python code and have it ready to use within a Kubernetes deployment.

