OpenTelemetry in Action

A hands-on guide to getting started with OpenTelemetry

Magsther
Jul 10, 2023 · 9 min read

Introduction

This hands-on guide is for anyone who wants to get started with OpenTelemetry.

If you are new to OpenTelemetry, I suggest starting with the OpenTelemetry Up and Running post, where I covered OpenTelemetry in detail.

OpenTelemetry is beginning to change the observability landscape by providing a long-awaited, vendor-agnostic way to instrument your applications and collect your data. What in the past often required a proprietary agent running on your machines can now be handled by OpenTelemetry's SDKs/APIs and a collector. This decouples the instrumentation from the storage backends, which is great: it means we are not tied to any particular tool, and we avoid lock-in with commercial vendors.

By using OpenTelemetry, you only have to ask developers ONCE to instrument their code (without them having to know where the data will be stored). The telemetry data (logs, metrics and traces) is sent to a collector that YOU own, and from there you can send it to whatever vendor you like. You can even use more than one vendor and compare them, without asking developers to change anything.

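To make this concrete, here is a minimal sketch of a collector configuration that receives OTLP data once and fans it out to two backends in parallel. The vendor endpoints below are placeholders, not real ones:

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  # Two hypothetical backends; swap in your vendors' real OTLP endpoints
  otlp/vendor-a:
    endpoint: vendor-a.example.com:4317
  otlp/vendor-b:
    endpoint: vendor-b.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/vendor-a, otlp/vendor-b]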

Getting started with the OpenTelemetry Demo

To see OpenTelemetry in Action, we will use the OpenTelemetry Astronomy Shop Demo application.

Here, I will be using Jaeger, a very popular open-source tool for trace analysis and querying. However, you can use the OpenTelemetry Collector to export telemetry data to multiple observability backends, such as Lightstep, Grafana, Honeycomb and New Relic, by adding them to the exporters section of the collector configuration, which tells OpenTelemetry where to send the data.

Further down, I will show what this looks like: we will sign up for test accounts, send data to the vendors mentioned above, and then log in to see the data in each tool's UI.

What is the OpenTelemetry demo application?

The demo application is provided by the OpenTelemetry project and serves as a showcase for anyone who wants to see OpenTelemetry in action.

The application simulates a web shop and currently consists of 15 different services written in more than 10 different programming languages, all instrumented with OpenTelemetry. It uses a load generator (Locust) to continuously send requests imitating realistic user shopping flows to the frontend.

You can run the demo application on either Docker or Kubernetes.

The architecture of the application can be illustrated like this:

[Diagram: service roles in the demo application]

Scenario

One of the services included in the demo application is the featureflagservice. This is a CRUD feature flag service that we can use to demonstrate various scenarios, like fault injection and how to emit telemetry from a feature-flag-reliant service.


We can browse to the UI of the service at http://localhost:8080/feature/ and enable one or more feature flags to control failure conditions, and then use a tool of our choice to diagnose the issue and determine the root cause.

This makes it a perfect example to see OpenTelemetry in action.

Prerequisites

To follow this tutorial, you will need Docker installed on your machine. See the installation instructions here.
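
Once installed, you can verify that both Docker and the Compose plugin are available:

docker --version
docker compose version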

Setting up the Demo

The installation instructions are good, so head over to the GitHub repo and clone it to your computer.

git clone https://github.com/open-telemetry/opentelemetry-demo.git

cd opentelemetry-demo/

docker compose up --no-build

Note that if you are on Apple Silicon, you need to build the images locally first with this command:

docker compose build

Start the demo application

Once all the images are built and the containers are started, you can access the different UIs. With the default setup, everything is exposed through the frontend proxy on port 8080:

Webstore: http://localhost:8080/
Grafana: http://localhost:8080/grafana/
Feature Flags UI: http://localhost:8080/feature/
Load Generator UI: http://localhost:8080/loadgen/
Jaeger UI: http://localhost:8080/jaeger/ui/

Verify that you can access them via your browser.
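
You can also do a quick sanity check from the terminal (assuming the default port mapping above):

docker compose ps
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/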

Screenshots

Webstore

Grafana

Feature Flags UI

Here, we can enable the adServiceFailure feature flag, which will generate an error for GetAds 1/10th of the time.

Load Generator UI

Jaeger UI

View and Analyse with the Jaeger UI

With the adServiceFailure feature flag enabled, let's see how we can use Jaeger to diagnose the issue and determine the root cause. Remember that the service will generate an error for GetAds 1/10th of the time.

Jaeger is usually the first tool you come in contact with when you enter the world of distributed tracing. With Jaeger, we can visualise the whole chain of events, and with that visibility we can more easily isolate the problem when something goes wrong.

Let’s view the data in the Jaeger UI in more detail.

Click on Find Traces to see all traces generated.

Let's now select the adservice service from the dropdown list to see if we can spot any errors.

Here you can see that Jaeger has found the trace which contains errors.

The trace contains a list of spans in a parent-child relationship that represents the order of execution, along with the time taken by each span.

Click on the trace to get detailed information from it. You will see that the trace consists of spans, where each span represents an operation performed by one of the services.

From the screenshot below, we can see the waterfall view of a trace with spans.

If you click on the span from the adservice service, you will see what caused the error: RESOURCE_EXHAUSTED.

We can also use the panel on the left-hand side. In the Tags field, put in error=true and you should see the following.

Directed Acyclic Graph (DAG)

You can use the Directed Acyclic Graph (DAG) view to see the dependencies between the microservices.

RED (Request, Error, Duration) metrics

Surfaced in the Jaeger UI as the "Monitor" tab, the motivation for this feature is to help identify interesting traces (e.g. high QPS, slow or erroneous requests) without needing to know the service or operation names up-front.

It is achieved by aggregating span data to produce RED (Request, Error, Duration) metrics.
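
One way to do this aggregation on the collector side is the spanmetrics connector from opentelemetry-collector-contrib; the demo's own wiring may differ between versions, so take this as a minimal sketch:

receivers:
  otlp:
    protocols:
      grpc:

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889

connectors:
  spanmetrics:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [spanmetrics]   # spans are fed into the connector
    metrics:
      receivers: [spanmetrics]   # the connector emits RED metrics
      exporters: [prometheus]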

Tip: USE vs RED vs the Four Golden Signals — the most useful metrics to collect

Click on the Monitor tab and select adservice, and you should see the following metrics.
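
In the demo this comes pre-wired. If you want to enable the Monitor tab in your own Jaeger deployment, you point Jaeger at the Prometheus instance that stores the span metrics. A sketch in docker-compose terms (not the demo's exact file):

services:
  jaeger:
    image: jaegertracing/all-in-one:latest
    environment:
      - METRICS_STORAGE_TYPE=prometheus               # turns on the Monitor tab
      - PROMETHEUS_SERVER_URL=http://prometheus:9090  # where the RED metrics are stored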

Vendors — Observability backends

Until now, we have used Jaeger, but with OpenTelemetry being supported by every major vendor, you are free to use any observability backend you want.

Here, I'll sign up for a test account at Lightstep, Grafana Cloud, Honeycomb and New Relic. I will then update the OpenTelemetry Collector configuration file for the demo application to add the necessary exporters. Once the data is flowing, I'll log in to each vendor's UI to view the data.

Lightstep

To get started with Lightstep, you need to have an account. In this demo, I used the free tier account called Community. Once you have signed in, you need to create an access token in order to send telemetry data to Lightstep. Add the token to otelcol-config.yml together with the following configuration.

exporters:
  logging:
    logLevel: debug
  otlp/ls:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "<lightstep_access_token>"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/ls]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/ls]

Run docker compose up in the opentelemetry-demo folder.

The data should now be flowing from our collector to Lightstep.

When you log in to Lightstep, you should see something similar to this:

Grafana

To get started with Grafana, you need to have an account. Grafana offers a 14-day trial of their Grafana Pro plan, which I'll use for this demo. On the start page, we can find our Grafana stack with information on how to set up and manage different Grafana products.

Click on the details page to find the endpoints. You can use Tempo for traces, Loki for logs and Prometheus for metrics. You will need to generate a Grafana API key in order to send telemetry data to Grafana. After you have generated the key, add it to otelcol-config.yml together with the following configuration.

extensions:
  basicauth/grafanacloud:
    client_auth:
      username: ${GRAFANA_INSTANCE_ID}
      password: ${GRAFANA_CLOUD_APIKEY}

exporters:
  otlphttp/grafanacloud:
    auth:
      authenticator: basicauth/grafanacloud
    endpoint: ${GRAFANA_OTLP_ENDPOINT}

service:
  extensions: [basicauth/grafanacloud]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlphttp/grafanacloud, spanmetrics]
    metrics:
      receivers: [otlp, spanmetrics]
      processors: [batch]
      exporters: [prometheus, logging, otlphttp/grafanacloud]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
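
The configuration above references three environment variables. They need to be set (and passed through to the collector container by docker compose) before you start the demo. The values below are placeholders for your own stack's instance ID, API key and OTLP endpoint:

export GRAFANA_INSTANCE_ID="123456"
export GRAFANA_CLOUD_APIKEY="<your_api_key>"
export GRAFANA_OTLP_ENDPOINT="https://otlp-gateway-<zone>.grafana.net/otlp"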

Run docker compose up in the opentelemetry-demo folder.

The data should now be flowing from our collector to Grafana.

When you log in to Grafana, you should see something similar to this:

Loki — for the Logs

Tempo — for the traces

Click on the blue button to split the screen and display the trace information.

Prometheus — for the metrics

Honeycomb

To get started with Honeycomb, you need to have an account. Honeycomb offers a free tier, which I'll use for this demo. Once you have signed in, you need to create a Honeycomb API key in order to send telemetry data to Honeycomb. Add the key to otelcol-config.yml together with the following configuration.

exporters:
  otlp/honeycomb:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "<HONEYCOMB_API_KEY>"
      "x-honeycomb-dataset": "webstore-metrics"

processors:
  attributes:
    include:
      match_type: strict
      services:
        - frontend-proxy
    actions:
      - key: "net.component"
        value: "proxy"
        action: insert

service:
  pipelines:
    metrics:
      exporters:
        - prometheus
        - logging
        - otlp/honeycomb
    traces:
      exporters:
        - otlp
        - logging
        - spanmetrics
        - otlp/honeycomb
      processors:
        - attributes
        - batch
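
A note on the attributes processor above: it matches spans coming from the frontend-proxy service and inserts a net.component attribute with the value proxy, so that proxy traffic is easy to single out when querying the data in Honeycomb.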

Run docker compose up in the opentelemetry-demo folder.

The data should now be flowing from our collector to Honeycomb.

When you log in to Honeycomb, you should see something similar to this:

New Relic

To get started with New Relic, you need to have an account. New Relic offers a free tier, which I'll use for this demo. Once you have signed in, you need to create a New Relic license key in order to send telemetry data to New Relic. In addition to the key, you also need to know which endpoint to use, which you can find here.

Add the token and endpoint to the otelcol-config.yml together with the following configuration.

exporters:
  otlp/newrelic:
    endpoint: ${NEWRELIC_OTLP_ENDPOINT}
    headers:
      api-key: ${NEWRELIC_LICENSE_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/newrelic]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/newrelic]

Run docker compose up in the opentelemetry-demo folder.

The data should now be flowing from our collector to New Relic.

When you log in to New Relic, you should see something similar to this:

You can find more examples in the OpenTelemetry demo repository.

Conclusion

By now you should have an idea of what is possible with OpenTelemetry and understand how it is changing the observability landscape.

If you find this helpful, please click the clap 👏 button and follow me to get more articles on your feed.
