Tracing Node.js Applications with OpenTelemetry

Fabio Reis
직방 기술 블로그 (Zigbang Tech Blog)
7 min read · Sep 15, 2023


When we think about monitoring, the first things that come to mind are logs and metrics. We often underestimate traces because of the extra work needed to set them up, whereas metrics and logs are usually provided automatically by any cloud platform.

The combination of metrics, logs, traces, and alarms is known as observability. Unlike monitoring, which only tells us that something is wrong within our system (for example, through metrics), observability aims to correlate the collected data to help us identify what is wrong, where, and why.

In this blog post, we will show you how to collect traces by setting up a simple observability stack with Node.js and TypeScript. Although it is not the focus of this post, Grafana also integrates well with external metrics and logs you may already have, and we encourage you to take a look at its other integrations.

Prerequisites

We must install Grafana and Tempo so we can add telemetry to our test application. For that, you should have the following tools installed; please follow their respective installation guides.

  • A Kubernetes cluster (a local one such as minikube works fine)
  • kubectl
  • Helm
  • Node.js and npm

Setting Up Helm Repositories

Helm is a package manager that will help us set up our Kubernetes environment with minimal effort. After installing Helm, you can add the Grafana chart repository, which also hosts the Tempo chart, with the following commands.

#adding Grafana charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

Installing Tempo

For this example, we will use the standard configuration of Tempo, but if you wish to customize any values, please check the Tempo Helm chart documentation and examples. You can easily customize your distribution by creating a YAML file with value overrides and applying it with Helm.

helm upgrade --install tempo grafana/tempo

Tempo supports many receivers and protocols; for the purposes of this tutorial, we will use the OTLP receiver with the HTTP protocol (port 4318). If you are interested in other receivers, take a look at Tempo's documentation.

Installing Grafana

In order to install Grafana, we need to configure our Helm values to add our Tempo service as a data source, so we can visualize our traces in Grafana. For more information on Grafana customization, please check the Grafana Helm chart documentation.

  • Add the following content to a file called grafana-helm.yaml
env:
  GF_AUTH_ANONYMOUS_ENABLED: true
  GF_AUTH_ANONYMOUS_ORG_ROLE: 'Admin'
  GF_AUTH_DISABLE_LOGIN_FORM: true

datasources:
  datasources.yaml:
    apiVersion: 1

    datasources:
      - name: Tempo
        type: tempo
        access: proxy
        orgId: 1
        url: http://tempo:3100
        basicAuth: false
        isDefault: true
        version: 1
        editable: false
        apiVersion: 1
        uid: tempo
  • Install Grafana with the YAML file we just created
helm upgrade -f grafana-helm.yaml --install grafana grafana/grafana

Configuring the NGINX Ingress Controller

We will need to access both our Grafana and Tempo services, so we need to expose them with the NGINX ingress controller and a Kubernetes Ingress that our test application can use.

  1. Install the NGINX ingress controller
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace

  2. Create the Ingress resources for the services

  • Save the following content to a file called ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: default
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 80
            path: /
    - http:
        paths:
          - pathType: Prefix
            backend:
              service:
                name: tempo
                port:
                  number: 4318
            path: /v1/traces
  • Create ingress with kubectl
kubectl create -f ingress.yaml

Testing Grafana

After completing the setup, you will be able to access both services at the addresses below. If you wish to use a host instead, you can change your Ingress file as needed; check the Kubernetes Ingress documentation for more details.

Grafana: http://localhost

Tempo OTLP endpoint: http://localhost/v1/traces

Sample of Grafana dashboard

You can also check all the Kubernetes resources we created with the following command:

#services and pods
kubectl get all -A

#for our custom ingress
kubectl get ingress

Other Observability Integrations

This is not the main focus of this tutorial, but you can also use metrics, logs, and alerts with Grafana. To do that, you can use Prometheus for metrics and Loki for logs, for example. The installation process works the same way as what we did with Helm. For more information, check the Prometheus Helm documentation and the Loki Helm documentation.

Setting Up OpenTelemetry with Node.js

We want to add traces to a Node.js API, so you can use your own project if you wish. If you don't already have one, you can use any open-source sample available on GitHub, for example the cats API in the NestJS repository:

https://github.com/nestjs/nest/tree/master/sample/10-fastify

Adding OpenTelemetry Dependencies

Add the following dependencies to your project:

npm install @opentelemetry/api @opentelemetry/auto-instrumentations-node @opentelemetry/exporter-trace-otlp-http @opentelemetry/sdk-node @opentelemetry/sdk-trace-node @opentelemetry/resources @opentelemetry/semantic-conventions

Adding Tracer and Auto-instrumentation

In your src folder, add a tracer.ts file with the following content:

import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { NodeSDK } from '@opentelemetry/sdk-node';
import {
  BasicTracerProvider,
  ConsoleSpanExporter,
  SimpleSpanProcessor,
} from '@opentelemetry/sdk-trace-node';
import { Resource } from '@opentelemetry/resources';
import { SemanticResourceAttributes } from '@opentelemetry/semantic-conventions';

class Tracer {
  private sdk: NodeSDK | null = null;

  // url is optional and can be omitted - the default is http://localhost:4318/v1/traces
  private exporter = new OTLPTraceExporter({ url: 'http://localhost/v1/traces' });

  private provider = new BasicTracerProvider({
    resource: new Resource({
      [SemanticResourceAttributes.SERVICE_NAME]: 'DEMO-APP',
    }),
  });

  public init() {
    try {
      // export spans to the console (useful for debugging)
      this.provider.addSpanProcessor(
        new SimpleSpanProcessor(new ConsoleSpanExporter()),
      );

      // export spans to the OpenTelemetry collector (our Tempo OTLP endpoint)
      this.provider.addSpanProcessor(new SimpleSpanProcessor(this.exporter));
      this.provider.register();

      this.sdk = new NodeSDK({
        traceExporter: this.exporter,
        instrumentations: [
          getNodeAutoInstrumentations({
            // let's disable fs for now, otherwise we cannot see the traces we want;
            // you can disable or enable instrumentations as needed
            '@opentelemetry/instrumentation-fs': { enabled: false },
          }),
        ],
      });

      this.sdk.start();

      console.info('The tracer has been initialized');
    } catch (e) {
      console.error('Failed to initialize the tracer', e);
    }
  }
}

export default new Tracer();

Initialize our Tracer

Make sure you initialize your tracer before your server starts; otherwise, auto-instrumentation won't work properly. In your main.ts file (or your entry-point module), add the following code.

import tracer from './tracer';
tracer.init();

// other imports
...

async function bootstrap() {
  ...
}

bootstrap();

Testing our Telemetry

After running your application, let's call some of the APIs so we can generate a few traces. For the cats API example, you can run one of the commands below.

# POST cats
curl -X POST http://localhost:3000/cats -H "Content-Type: application/json" -d '{
  "name": "test",
  "age": 18,
  "breed": "test"
}'

# GET cats
curl http://localhost:3000/cats

Check Telemetry on Grafana

After collecting some data, let's check our traces in Grafana. In the Explore menu, Tempo will be selected automatically; if not, select Tempo in the data source drop-down list.

In the query options, select Search and enter the name of the service we configured in our tracer.ts file (DEMO-APP). You will be able to see all the spans collected by auto-instrumentation.

Note that auto-instrumentation collects different information for each library. For example, if you check our HTTP spans, you will see many of our request parameters. If we were using MySQL, we would even see the query executed for that specific trace. Check each instrumentation's documentation for the details it records.

If you are curious about which libraries are instrumented automatically, check the documentation on the @opentelemetry/auto-instrumentations-node npm page.

Instrumentation sample
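
If you need to tune what gets collected, getNodeAutoInstrumentations accepts a per-instrumentation configuration object. The sketch below keeps the fs instrumentation disabled and skips spans for incoming requests to a hypothetical /health endpoint; you would pass the result to the NodeSDK instrumentations option in tracer.ts.

import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node';
import { IncomingMessage } from 'http';

const instrumentations = getNodeAutoInstrumentations({
  // keep noisy fs spans out of our traces (same as in tracer.ts)
  '@opentelemetry/instrumentation-fs': { enabled: false },
  // skip spans for a hypothetical /health endpoint so dashboards stay readable
  '@opentelemetry/instrumentation-http': {
    ignoreIncomingRequestHook: (request: IncomingMessage) => request.url === '/health',
  },
});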

Adding Manual Span

For this tutorial, we've only touched on the auto-instrumentation setup, and even though this alone helps us see the paths our users take inside our system, it does not always provide all the information we need.

OpenTelemetry also allows us to define our own custom spans and add them to our traces. This is useful for example when you need to add business information to the telemetry.

You can play around with your own custom spans by adding them to your API code. Follow the documentation for more information.
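
As a starting point, here is a minimal sketch of a custom span using the @opentelemetry/api package we already installed. The adoptCat function, its parameters, and the attribute names are hypothetical placeholders for your own business logic; only the tracer API calls come from OpenTelemetry.

import { trace, SpanStatusCode } from '@opentelemetry/api';

// any name works here; we reuse the service name from tracer.ts
const tracer = trace.getTracer('DEMO-APP');

// hypothetical service method wrapping a business operation in a custom span
export async function adoptCat(catId: string, ownerId: string) {
  return tracer.startActiveSpan('adoptCat', async (span) => {
    try {
      // business attributes that auto-instrumentation cannot know about
      span.setAttribute('cat.id', catId);
      span.setAttribute('owner.id', ownerId);

      // ...your business logic goes here...

      span.setStatus({ code: SpanStatusCode.OK });
    } catch (e) {
      span.recordException(e as Error);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw e;
    } finally {
      span.end();
    }
  });
}

Because the span is started with startActiveSpan, any spans created by auto-instrumentation inside your business logic (HTTP calls, database queries, and so on) will show up as its children in the same trace.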

Extra (Telemetry Propagation)

Tracing opens up a whole lot of possibilities, and we have only seen a small part of them in this tutorial. OpenTelemetry also works really well with distributed systems and provides tools to link APIs together in the same trace. This is extremely useful when working with a microservice architecture, where multiple APIs are triggered by a single user action.

In order to connect microservices together, you should configure OpenTelemetry to send and receive trace context to and from other APIs. You can achieve that using trace headers and the OpenTelemetry Node SDK.

OpenTelemetry provides us with some examples of how to set up our API to communicate with other services. Check the documentation on propagation for more details.
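
As a hedged sketch of what that looks like with @opentelemetry/api (assuming the default W3C trace context propagator that the Node SDK registers): the helper names buildOutgoingHeaders and runWithIncomingContext are hypothetical, and this manual approach is mostly needed for custom transports such as message queues, since the HTTP auto-instrumentation already injects and extracts the traceparent header for regular HTTP calls.

import { context, propagation } from '@opentelemetry/api';

// inject the current trace context into the headers of an outgoing message
export function buildOutgoingHeaders(): Record<string, string> {
  const headers: Record<string, string> = {};
  // writes the W3C traceparent (and tracestate) headers into the carrier
  propagation.inject(context.active(), headers);
  return headers;
}

// on the receiving side, extract the context from the incoming headers and
// run the handler within it, so new spans join the caller's trace
export function runWithIncomingContext<T>(
  incomingHeaders: Record<string, string>,
  handler: () => T,
): T {
  const extracted = propagation.extract(context.active(), incomingHeaders);
  return context.with(extracted, handler);
}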
