The OpenTelemetry Demo Revisited (Again)
Sending OTel Demo Data to Dynatrace
If you follow my work, you know that I’ve written about the OpenTelemetry (OTel) Demo a number of times over the last few years. I’ve always been a huge fan of it. Whenever folks ask me, “What’s the best way to get started with OTel?” I always point them to the Demo. I love it because it showcases the power of OpenTelemetry with relatively low effort. (I say relatively, because let’s face it…there’s always something that seems to go wrong whenever we try things out for ourselves for the first time, amirite? 🙃)
For those unfamiliar with the OTel Demo, here’s the elevator pitch. The Demo is a multi-microservice application written in a number of different languages, and is instrumented with OpenTelemetry. By default, the Demo sends OTel data to a number of open source back-ends: OpenSearch for logs ingest, Prometheus for metrics ingest, and Jaeger for traces ingest. It also leverages an open source version of Grafana for dashboarding and visualizing those three signals under one roof, so to speak.
OTel Demo & Dynatrace
Nothing stops you from configuring the Demo to send data to additional back-ends. In fact, many Observability vendors have forked the OTel Demo to showcase just that. As you may have guessed, Dynatrace also has a fork of the OTel Demo.
As a newbie to Dynatrace, I have spent some time exploring how to send OpenTelemetry (OTel) data to Dynatrace using some oldie-but-goodie examples of mine. The natural next stop for me was to configure the OpenTelemetry Demo to send OTel data to Dynatrace.
If you’re interested to know how I went about it, then you’re in the right place! Let’s get started!
Tutorial
I created a fork of the OpenTelemetry Demo repo, and within that fork, I created a special branch with the configurations needed for this tutorial. Note that the `main` branch of my fork does NOT have any of the special configs, so don’t use that if you plan on following this tutorial.
You can check out my branch here.
NOTE: By the time you read this, the OTel Demo may be a bit ahead of the snapshot that I have in my little branch. Such is the nature of an active project like OpenTelemetry. But don’t panic, my friend! This tutorial should give you a pretty good idea of how things work, and it should give you the tools to explore the Demo as it continues to evolve. ✌️
So, what’s special about my little branch?
- I created a Development (Dev) Container for running the OTel Demo. It not only includes Docker, which you need to run the Demo, it also includes language runtimes for the various languages that the Demo is written in. You know…in case you want to do any development on the Demo yourself, if you’re feeling adventurous. If you follow my work, you know that I am a huge fan of Dev Containers. 🤘
- There are additional configurations for pointing the OTel Demo’s OTel Collector to send OTel data to Dynatrace, which we’ll get into shortly.
Pre-requisites:
- A Dynatrace account and access token. Learn how to get a trial account and generate an access token here.
- Docker or Podman or equivalent
- Dev Containers plugin for VSCode
- Dev Containers CLI (grab it here or here)
The Dev Container part is optional, but I recommend it just because it gives you a pre-configured environment to run the Demo.
Here we go!
1- Clone the repo
Start by cloning the repo, and switch over to the `otel-dt-devcontainers` branch.

```shell
git clone git@github.com:avillela/opentelemetry-demo.git
cd opentelemetry-demo
git checkout otel-dt-devcontainers
```
2- Build and run the Dev Container
NOTE: If Dev Containers aren’t your jam, you can go ahead and skip this step.
Next, build and run your Dev Container.
```shell
devcontainer build --no-cache
devcontainer open
```
The build and open steps will take a few minutes when you run them for the first time.
You only need to build the Dev Container the first time you run the Demo. After that, you just need to run `devcontainer open` any time you want to run the Demo. That is, unless `devcontainer.json` has changed, in which case you should rebuild.
3- Edit the Collector config file
The OpenTelemetry Demo was designed with flexibility and modularity in mind. This makes it easy for you to easily fork the repo and include configurations to whatever backend you want, without causing mega Git conflict headaches when it’s time to update the fork with the latest upstream source. And believe me, you’ll want to update from the upstream source regularly, because the Demo is a VERY active repo. 🤓
Keeping that in mind…you’ll notice that there are two Collector configuration files:
- `otelcol-config.yml`
- `otelcol-config-extras.yml`
The `otelcol-config.yml` file contains the base Collector configurations for receivers, processors, connectors, exporters, and pipelines. As I mentioned earlier, by default, it exports traces to Jaeger, metrics to Prometheus, and logs to OpenSearch.
If you would like to override any of the OTel Collector base config file values, you would do so in `otelcol-config-extras.yml`. Don’t touch `otelcol-config.yml`. For example, if you would like to send traces to both Jaeger and another Observability back-end, you would configure that Observability back-end in this file. By default, this file is empty.
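To make the override mechanism a bit more concrete, here’s a rough sketch of what a minimal extras file could look like if you just wanted to add one extra traces back-end (the `otlphttp/mybackend` name and endpoint are made up for illustration). The Collector merges the two files, but a list you redefine in the extras file replaces the base list, which is why `spanmetrics` and `otlp` need to be re-listed to keep the base behavior intact:

```yaml
# Hypothetical minimal otelcol-config-extras.yml override.
# Note: exporter lists defined here REPLACE the base list, so the
# spanmetrics and otlp exporters from otelcol-config.yml must be
# re-listed to keep them active.
exporters:
  otlphttp/mybackend:            # hypothetical second back-end
    endpoint: "https://collector.example.com:4318"

service:
  pipelines:
    traces:
      exporters: [spanmetrics, otlp, otlphttp/mybackend]
```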
If you open up `otelcol-config-extras.yml` in my example repo, you’ll notice that it’s not empty, because I’ve pre-configured it just for you. (You’re welcome. 😜) Let’s take a look at it.
The file is located under `src/otelcollector/otelcol-config-extras.yml`:
```yaml
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# extra settings to be merged into OpenTelemetry Collector configuration
# do not delete this file

## Example configuration for sending data to your own OTLP HTTP backend
## Note: the spanmetrics exporter must be included in the exporters array
## if overriding the traces pipeline.
##
exporters:
  otlphttp/dt:
    endpoint: "https://${DT_ENV_ID}.${DT_ENV_SUFFIX}/api/v2/otlp"
    headers:
      Authorization: "Api-Token ${DT_TOKEN}"
  debug:
    verbosity: detailed

processors:
  cumulativetodelta:

service:
  pipelines:
    traces:
      exporters: [spanmetrics, otlp, otlphttp/dt, debug]
    metrics:
      processors: [cumulativetodelta, batch]
      exporters: [otlphttp/dt, otlphttp/prometheus, debug]
    logs:
      exporters: [otlphttp/dt, opensearch, debug]
```
Note that I:
- Added an `otlphttp` exporter called `otlphttp/dt`, to send OTel data to Dynatrace
- Set the `debug` exporter verbosity to `detailed`
- Added the `cumulativetodelta` processor for metrics (needed for Dynatrace metrics ingest)
- Updated the `pipeline` definitions to include `otlphttp/dt` (traces, metrics, and logs pipelines) and `cumulativetodelta` (metrics pipeline)
You can learn more about Dynatrace-specific Collector configurations here.
You may have also noticed that `otelcol-config-extras.yml` references the following environment variables: `DT_ENV_ID`, `DT_ENV_SUFFIX`, and `DT_TOKEN`. How do these get passed in? Great question! Read on!
4- Configure the environment variables
You can configure the `DT_ENV_ID`, `DT_ENV_SUFFIX`, and `DT_TOKEN` environment variables in a file called `docker-compose.override.yml`. Just as `otelcol-config-extras.yml` overrides values in `otelcol-config.yml`, `docker-compose.override.yml` overrides values in `docker-compose.yml`.
You might be wondering why you can’t see that file in my example repo. Two reasons:
- That file is in `.gitignore`, so you’ll never be able to commit it to version control.
- It contains app-specific configs, including your Dynatrace access token, which you definitely don’t want in version control.
And with that in mind, let’s create `docker-compose.override.yml`.

```shell
touch docker-compose.override.yml
```
And then let’s populate it, like this:
```yaml
services:
  otelcol:
    environment:
      - DT_ENV_ID=<your_dynatrace_tenant>
      - DT_TOKEN=<your_dynatrace_token>
      - DT_ENV_SUFFIX=live.dynatrace.com
```
Where:
- `DT_ENV_ID` is your Dynatrace tenant
- `DT_TOKEN` is your Dynatrace access token
- `DT_ENV_SUFFIX` is your environment suffix, which is already set to `live.dynatrace.com`. I added this environment variable because I also have access to a couple of tenants that reside in test environments under different suffixes. It’s safe to say that this doesn’t apply to you, so don’t update this value.
Learn how to find your `DT_ENV_ID` and `DT_TOKEN` values here.
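If you want to double-check what these variables expand to, here’s a quick shell sketch that mirrors the endpoint template from `otelcol-config-extras.yml`. The tenant ID `abc12345` is a made-up placeholder — substitute your own values:

```shell
# Placeholder values -- substitute your own tenant ID.
DT_ENV_ID="abc12345"
DT_ENV_SUFFIX="live.dynatrace.com"

# This mirrors the endpoint template used by the otlphttp/dt exporter.
ENDPOINT="https://${DT_ENV_ID}.${DT_ENV_SUFFIX}/api/v2/otlp"
echo "${ENDPOINT}"
```

If the printed URL doesn’t look like `https://<your_tenant>.live.dynatrace.com/api/v2/otlp`, revisit your `docker-compose.override.yml` values before starting the Demo.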
5- Run the Demo!
Now that you’ve got everything configured, it’s time to run the Demo!
```shell
docker compose up
```
This may take a while the first time around, because Docker Compose needs to pull all of the service images from the GitHub Container Registry (GHCR).
Once the app starts running, you’ll see output like this in the console:
Give it a few minutes, and then try to access the Demo by going to http://localhost:8080. If you see this pop-up from VSCode (only if you’re running it in the Dev Container), then it’s safe to say that the front-end is ready to go:
And once you navigate to http://localhost:8080, you’ll see this:
6- See data in Dynatrace
Once the app has been running for a few minutes, log into Dynatrace to check out your traces, logs, and metrics.
I’m not going in-depth on how to navigate the Dynatrace UI because a) you’re probably tired of reading and b) there are already some great videos on the Dynatrace YouTube channel that explain this stuff pretty well, so I encourage you to go there for a more in-depth look, if that tickles your fancy.
Below are some screenshots of OTel Demo data in Dynatrace.
Final Thoughts
I’m always impressed by how configurable and extensible the OTel Demo is. Its design makes it fairly straightforward to configure multiple Observability backends, which is perfect for evaluating multiple vendors at the same time to see how each one handles your OTel data. If you run the app for an hour or so, you end up with a pretty high volume of data which you can then slice and dice and explore in your chosen backend(s).
Aaand that’s a wrap. This is my last blog post of 2024. Cheers to you, my lovely readers, for your continued support. Happy holidays, and here’s to an amazing 2025! 🧋🥂
And now, I will leave you with a photo of my rats, Buffy and Katie Jr., taken in January of this year, when they were babies. ❤️
Until next year, peace, love and code. ✌️💜👩💻