OpenTelemetry with Elastic Observability

Rahul Ranjan
5 min read · Mar 23, 2024


OpenTelemetry is an open-source framework for observability that, when combined with Elastic Observability, provides powerful insights into distributed systems. It enables organizations to efficiently monitor, troubleshoot, and optimize their applications. In this article, we will provide you with a detailed guide on how to set up an OpenTelemetry demo with Elastic Observability. We will cover essential steps, configurations, and best practices that will help you leverage the full potential of observability in your environment.

Understanding OpenTelemetry and Elastic Observability:

OpenTelemetry:

  • OpenTelemetry is a project under the Cloud Native Computing Foundation (CNCF) that aims to provide a cohesive approach to instrument, generate, collect, and export telemetry data, which includes metrics, traces, and logs, from software applications. It offers libraries for instrumenting code in various programming languages and provides standardized APIs to capture telemetry data from different components of distributed systems.
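To make the trace signal concrete, here is a deliberately simplified model of a span (illustrative only, not the actual OpenTelemetry SDK classes): spans belonging to the same request share a trace ID and form a call tree via parent IDs.

```python
from dataclasses import dataclass, field
from typing import Optional

# A simplified, hypothetical model of a trace span -- illustrative only,
# NOT the real OpenTelemetry SDK classes.
@dataclass
class Span:
    trace_id: str                   # shared by every span in one distributed trace
    span_id: str                    # unique to this operation
    name: str                       # e.g. "GET /checkout"
    parent_id: Optional[str] = None # links spans into a call tree
    attributes: dict = field(default_factory=dict)

root = Span(trace_id="t-1", span_id="s-1", name="GET /checkout")
child = Span(trace_id="t-1", span_id="s-2", name="SELECT cart",
             parent_id=root.span_id)
assert child.trace_id == root.trace_id  # same trace, parent/child spans
```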

Elastic Observability:

  • Elastic Observability is a complete solution for observability provided by Elastic. It provides integrated tools for monitoring, logging, and tracing distributed applications. The solution includes Elastic APM (Application Performance Monitoring), Elastic Logs, and Elastic Metrics, all of which are seamlessly integrated within the Elastic Stack.

Setup Details:

This doc covers how to set up the OpenTelemetry demo with Elastic Observability using either Docker Compose or Kubernetes.

Download the source code of the application from GitHub Repo.

Deploy the application on a Kubernetes cluster in your cloud service of choice, or on a local Kubernetes platform. First, clone the repository locally. Make sure you also have kubectl and helm installed locally:

git clone https://github.com/elastic/opentelemetry-demo.git
  • OTEL_EXPORTER_OTLP_ENDPOINT: Elastic's APM Server endpoint
  • OTEL_EXPORTER_OTLP_HEADERS: the Elastic authorization header

You can find these values in the OpenTelemetry setup instructions under Integrations → APM in your Elastic Cloud deployment.
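The expected shapes of these two values can be sketched in Python (hypothetical helpers, not part of the demo): the endpoint drops its https:// prefix but keeps a port, and the headers value uses key=value form.

```python
from urllib.parse import urlparse

def otlp_endpoint(apm_url: str, default_port: int = 443) -> str:
    """Strip any https:// prefix and ensure a port, matching the shape the
    collector config expects. Hypothetical helper, not part of the demo."""
    parsed = urlparse(apm_url if "//" in apm_url else "//" + apm_url)
    port = parsed.port or default_port
    return f"{parsed.hostname}:{port}"

def otlp_headers(secret_token: str) -> str:
    # OTEL_EXPORTER_OTLP_HEADERS takes comma-separated key=value pairs
    return f"Authorization=Bearer {secret_token}"

print(otlp_endpoint("https://987654.xyz.com"))  # -> 987654.xyz.com:443
print(otlp_headers("aaaaaaaaaaaaaaa"))          # -> Authorization=Bearer aaaaaaaaaaaaaaa
```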

Docker Compose

Start a free trial on Elastic Cloud and copy the endpoint and secretToken from the Elastic APM setup instructions in your Kibana.

  1. Open the file src/otelcollector/otelcol-config-extras.yml in an editor and replace the following two placeholders:
  • YOUR_APM_ENDPOINT_WITHOUT_HTTPS_PREFIX: Your Elastic APM endpoint (without https:// prefix) that must also include the port (for example: 987654.xyz.com:443).
  • YOUR_APM_SECRET_TOKEN: your Elastic APM secret token.

The updated file should look like the below (make sure to note the exact format of the secret token, including the Bearer keyword):

exporters:
  otlp/elastic:
    # !!! Elastic APM https endpoint WITHOUT the "https://" prefix
    endpoint: "11111111111.apm.xyz.xyz.cloud.es.io:443"
    compression: none
    headers:
      Authorization: "Bearer aaaaaaaaaaaaaaa"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [spanmetrics, otlp/elastic]
    metrics:
      receivers: [otlp, spanmetrics]
      processors: [batch]
      exporters: [otlp/elastic]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/elastic]

This also fixes a minor issue in the metrics section of the file as downloaded from GitHub.
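A common cause of collector start-up failures is a pipeline referencing an exporter or receiver that was never defined. A minimal sketch of that sanity check, hand-transcribing the relevant parts of the config as a plain Python dict (a hypothetical helper, not part of the demo, written this way to avoid a YAML library dependency):

```python
# Hand-transcribed mirror of the relevant collector config sections.
config = {
    "receivers": {"otlp": {}, "spanmetrics": {}},
    "exporters": {"otlp/elastic": {}, "spanmetrics": {}},
    "service": {"pipelines": {
        "traces":  {"receivers": ["otlp"], "exporters": ["spanmetrics", "otlp/elastic"]},
        "metrics": {"receivers": ["otlp", "spanmetrics"], "exporters": ["otlp/elastic"]},
        "logs":    {"receivers": ["otlp"], "exporters": ["otlp/elastic"]},
    }},
}

def undefined_components(cfg: dict) -> list:
    """Return (pipeline, kind, name) tuples for components that are
    referenced in a pipeline but never defined at the top level."""
    missing = []
    for pipeline, pipe in cfg["service"]["pipelines"].items():
        for kind in ("receivers", "exporters"):
            for comp in pipe.get(kind, []):
                if comp not in cfg.get(kind, {}):
                    missing.append((pipeline, kind, comp))
    return missing

assert undefined_components(config) == []  # every reference resolves
```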

Start the demo with the below command from the repository’s root directory:

docker-compose up -d

Kubernetes

Create a Kubernetes cluster. Set up Kubectl and Helm.

  1. Set up Elastic Observability on Elastic Cloud.
  2. Create a secret in Kubernetes with the following command.
kubectl create secret generic elastic-secret \
--from-literal=elastic_apm_endpoint='YOUR_APM_ENDPOINT_WITHOUT_HTTPS_PREFIX' \
--from-literal=elastic_apm_secret_token='YOUR_APM_SECRET_TOKEN'

Don’t forget to replace

  • YOUR_APM_ENDPOINT_WITHOUT_HTTPS_PREFIX: Your Elastic APM endpoint (without https:// prefix) that must also include the port (for example: 987654.xyz.com:443).
  • YOUR_APM_SECRET_TOKEN: Your Elastic APM secret token (see the Docker Compose section).
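For context, Kubernetes stores Secret values base64-encoded, which is an encoding, not encryption. A small sketch of what happens to each --from-literal value (the values below are placeholders, not real credentials):

```python
import base64

# Kubernetes base64-encodes each Secret value; `kubectl create secret
# generic` does this for every --from-literal pair. Placeholder values only.
literals = {
    "elastic_apm_endpoint": "987654.xyz.com:443",
    "elastic_apm_secret_token": "aaaaaaaaaaaaaaa",
}
encoded = {k: base64.b64encode(v.encode()).decode() for k, v in literals.items()}
decoded = {k: base64.b64decode(v).decode() for k, v in encoded.items()}
assert decoded == literals  # encoding is trivially reversible
```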

3. Execute the following commands to deploy the OpenTelemetry demo to your Kubernetes cluster.

# switch to the kubernetes/elastic-helm directory
cd kubernetes/elastic-helm

# !(when running it for the first time) add the open-telemetry Helm repository
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts

# !(when an older helm open-telemetry repo exists) update the open-telemetry helm repo
helm repo update open-telemetry

# deploy the demo through helm install
helm install -f values.yaml my-otel-demo open-telemetry/opentelemetry-demo

Once your application is up on Kubernetes, validate that all the pods are running in the default namespace.

kubectl get pods -n default
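A quick sketch of what "all pods running" means when reading that output programmatically (the sample output below is hypothetical; real pod names and counts will differ):

```python
# Hypothetical sample of `kubectl get pods -n default` output.
sample = """\
NAME                             READY   STATUS    RESTARTS   AGE
my-otel-demo-frontend-abc123     1/1     Running   0          2m
my-otel-demo-cartservice-def45   1/1     Running   0          2m
my-otel-demo-loadgenerator-gh6   1/1     Running   0          2m
"""

def all_running(kubectl_output: str) -> bool:
    """True when every pod row reports Running (or Completed for jobs)."""
    rows = kubectl_output.strip().splitlines()[1:]  # skip the header row
    return all(row.split()[2] in ("Running", "Completed") for row in rows)

assert all_running(sample)
```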

Kubernetes Monitoring:

This demo includes cluster-level metrics and Kubernetes events collection. To enable node-level metrics collection and autodiscovery for Redis pods, run an additional OTel Collector DaemonSet:

helm install daemonset open-telemetry/opentelemetry-collector --values daemonset.yaml

Explore and analyze the data With Elastic

View your OTel-instrumented services in Kibana's APM Service Map. To access it, go to APM in the Elastic Observability UI and select Service Map.

If you see your services in Kibana, it means that data is being sent to the Elastic cluster by the OpenTelemetry Collector. You can now explore the data and experiment with it.

To get a comprehensive understanding of all the services and transaction flows between them, you can refer to the APM service map (as demonstrated in the previous step). Additionally, you have the option to examine individual services and the collected transactions.

For example, the loadgenerator service's details are listed:

  • Average service latency
  • Throughput
  • Main transactions
  • Failed transaction rate
  • Errors
  • Dependencies

Now click on Transactions → GET (or any request) to see the full trace with all of its spans. You can further explore and analyze the data in minute detail.

Elastic uses machine learning to identify potential latency issues across services by analyzing traces. From a transaction, open the Latency Correlations tab and run the correlation.

Analyze your data with Elastic machine learning (ML)

After integrating OpenTelemetry metrics with Elastic, you can begin to analyze your data using Elastic's machine-learning capabilities.
