OpenTelemetry Frontend Demo

Can YAMAN
Published in NEW IT Engineering
Jul 8, 2024 · 6 min read
Photo by Justin Morgan on Unsplash

This is the first post in the OpenTelemetry demo series that I have created. It focuses on tracing, particularly from the client-side perspective. You might consider it a complementary demo to the official OpenTelemetry demo.

In today's enterprise environment, digitalization is inevitable: every company builds its own internal tools, and data is its most critical asset. In this context, keeping confidential data inside the company becomes an additional constraint on observability. Public web analytics tools such as Google Analytics can supply the data needed for incident or performance investigations, but they are not an option for internal tools and confidential environments.

In this demo, I will cover how OpenTelemetry addresses the gap created by the lack of analytics tools, particularly in enterprise environments.

Setup

Let me define my setup for this demo. The project is designed to run on an Apple Silicon developer machine, using Podman, an open-source container engine and Docker alternative.

Structure of the frontend

There are multiple frontend applications built with the most common frameworks and libraries, Angular and React. A load balancer acts as the gateway to the system. TLS termination, security, and authentication are assumed to be handled by the gateway.

Components

  • SPA Frontend Applications: Angular app, ReactJS app
  • Load Balancer (Gateway): Nginx server configured as a reverse proxy
  • Angular Application Server: SSR supported Node.js server
  • React Application Server: Express.js based server
  • Sample Backend Service: Echo server for inspecting forwarded headers
  • OpenTelemetry Collector: To collect spans and trace data
  • Grafana Tempo: To store spans
  • Grafana Dashboard: To visualize the trace data

Test Flow

  1. The user opens a browser and enters the URL.
  2. The browser reaches out to the load balancer.
  3. The load balancer forwards the request to the file server, which contains the HTML and other resources for the website.
  4. The resources are loaded, and the website is rendered.
  5. The user interacts with the website.
  6. The SPA application calls backend services and receives responses.
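Put together, these steps should surface in Grafana as one distributed trace per page load and one per interaction. A rough sketch of the span trees to expect (the span names below are illustrative, not the exact names the auto-instrumentations emit):

```typescript
// Illustrative span trees for the test flow above.
interface SpanNode {
  name: string;
  children: SpanNode[];
}

// Steps 1-4: initial page load.
const pageLoadTrace: SpanNode = {
  name: "documentLoad (browser)",
  children: [
    {
      name: "GET / (nginx gateway)",
      children: [{ name: "GET / (Angular SSR server)", children: [] }],
    },
    { name: "resourceFetch: main.js (browser)", children: [] },
  ],
};

// Steps 5-6: user interaction that triggers a backend call.
const interactionTrace: SpanNode = {
  name: "click (browser)",
  children: [
    {
      name: "HTTP GET (fetch instrumentation)",
      children: [
        {
          name: "GET /echo (nginx gateway)",
          children: [{ name: "GET /echo (echo backend)", children: [] }],
        },
      ],
    },
  ],
};

// Depth of a span tree = longest parent -> child chain.
function depth(node: SpanNode): number {
  return 1 + Math.max(0, ...node.children.map(depth));
}

console.log(depth(pageLoadTrace), depth(interactionTrace));
```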

For the JavaScript frontend side, you can find further implementation details on the OpenTelemetry website and in their resources. The simplest integration method is auto-instrumentation. The instrument.ts file below serves as our sample integration; it needs to be imported at the root of the SPA app, and it specifies the URL to which traces are pushed, i.e. the OpenTelemetry Collector.

In an enterprise environment the setup should be secured and protected, so in this scenario the collector sits behind the load balancer/gateway. HTTP is used as the transport protocol for broader compatibility. Below are the configurations for the collector and for Nginx to forward the tracing data. This setup will also demonstrate that the OpenTelemetry Protocol is stateless and can operate behind any type of load balancer.

instrument.ts

import { registerInstrumentations } from '@opentelemetry/instrumentation';
import { getWebAutoInstrumentations } from '@opentelemetry/auto-instrumentations-web';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { ZoneContextManager } from '@opentelemetry/context-zone';
import { Resource } from '@opentelemetry/resources';
import { SEMRESATTRS_SERVICE_NAME, SEMRESATTRS_SERVICE_NAMESPACE } from '@opentelemetry/semantic-conventions';

import {
  WebTracerProvider,
  ConsoleSpanExporter,
  SimpleSpanProcessor,
  BatchSpanProcessor,
} from '@opentelemetry/sdk-trace-web';

const provider = new WebTracerProvider({
  resource: new Resource({
    [SEMRESATTRS_SERVICE_NAME]: 'angular-web',
    [SEMRESATTRS_SERVICE_NAMESPACE]: 'website',
  }),
});

// For demo purposes only: immediately log traces to the console
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));

// Batch traces before sending them to the collector behind the gateway
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({
      url: 'http://localhost/v1/traces',
      headers: {
        'x-custom-header-key': 'CUSTOM_HEADER_VALUE',
      },
    }),
  ),
);

provider.register({
  contextManager: new ZoneContextManager(),
});

registerInstrumentations({
  instrumentations: [
    getWebAutoInstrumentations({
      '@opentelemetry/instrumentation-document-load': {},
      '@opentelemetry/instrumentation-user-interaction': {},
      '@opentelemetry/instrumentation-fetch': {},
      '@opentelemetry/instrumentation-xml-http-request': {},
    }),
  ],
});

main.ts

import { bootstrapApplication } from '@angular/platform-browser';
import { appConfig } from './app/app.config';
import { AppComponent } from './app/app.component';
import './instrument';

bootstrapApplication(AppComponent, appConfig)
  .catch((err) => console.error(err));

This Nginx configuration traces all requests except the tracing data itself (the /v1/traces location). While forwarding a request, Nginx injects the W3C Trace Context headers into it. The backend can then extract this context and create a child span from it, and downstream services propagate the context further, generating their own spans. Each service, including the frontend applications running in the browser, delivers its spans to the OpenTelemetry Collector, which forwards them to Grafana Tempo for storage.
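The W3C Trace Context header that Nginx propagates is a single `traceparent` value of the form `version-traceid-spanid-flags`. A minimal sketch of what "generating a child span from it" means at the header level (the OpenTelemetry SDK does this for you automatically; this is just to illustrate the format):

```typescript
import { randomBytes } from "crypto";

// traceparent format: 2-hex version, 32-hex trace-id, 16-hex parent span-id,
// 2-hex flags (bit 0 = sampled). Example:
//   00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
const TRACEPARENT = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/;

// Continue the trace: keep the trace-id and flags, mint a fresh span-id.
function childTraceparent(incoming: string): string | null {
  const m = TRACEPARENT.exec(incoming);
  if (m === null) return null; // not a valid traceparent header
  const [, version, traceId, , flags] = m;
  const newSpanId = randomBytes(8).toString("hex");
  return `${version}-${traceId}-${newSpanId}-${flags}`;
}
```

In a real service you would not do this by hand: the Node SDK's HTTP instrumentation extracts the incoming context and attaches new spans to it transparently.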

nginx.conf

load_module modules/ngx_otel_module.so;

user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    otel_trace on;
    otel_service_name nginx;
    otel_trace_context propagate;

    otel_exporter {
        endpoint collector:4317;
    }

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'b3="$otel_parent_sampled" [$otel_parent_id] -> "$otel_trace_id-$otel_span_id"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

conf.d/default.conf

server {
    listen 80;
    listen [::]:80;

    server_name localhost;

    location /echo {
        proxy_set_header "Connection" "";

        proxy_pass http://echo:8888/echo;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /v1/traces {
        otel_trace off;

        proxy_pass http://collector:4318/v1/traces;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /pages {
        proxy_pass http://pages:4001/pages;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location / {
        proxy_pass http://web:4000/;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

docker-compose

nginx:
  image: nginx:1.25-alpine-otel
  volumes:
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf:z
    - ./nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf:z
  ports:
    - "80:80"
  depends_on:
    - echo
    - web
echo:
  image: mendhak/http-https-echo:33
  environment:
    - HTTP_PORT=8888
    - HTTPS_PORT=8443
    - PROMETHEUS_ENABLED=true
  ports:
    - "9888:8888"
    - "9443:8443"
web:
  image: localhost/web-app:latest
  environment:
    - OTEL_TRACES_EXPORTER=otlp
    - OTEL_EXPORTER_OTLP_INSECURE=true
    - OTEL_EXPORTER_OTLP_SPAN_INSECURE=true
    - OTEL_EXPORTER_OTLP_METRIC_INSECURE=true
    - OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4318
    - OTEL_RESOURCE_ATTRIBUTES="service.namespace=web-app,service.name=nodejs,service.version=0.0.1"
    - OTEL_NODE_RESOURCE_DETECTORS="env,host,os,process,container"
    - OTEL_SERVICE_NAME=nodejs
  depends_on:
    - collector
  ports:
    - "4000:4000"
pages:
  image: localhost/reactjs-pages:latest
  environment:
    - OTEL_TRACES_EXPORTER=otlp
    - OTEL_EXPORTER_OTLP_INSECURE=true
    - OTEL_EXPORTER_OTLP_SPAN_INSECURE=true
    - OTEL_EXPORTER_OTLP_METRIC_INSECURE=true
    - OTEL_EXPORTER_OTLP_ENDPOINT=http://collector:4318
    - OTEL_RESOURCE_ATTRIBUTES="service.namespace=reactjs-pages,service.name=nodejs,service.version=0.1.0"
    - OTEL_NODE_RESOURCE_DETECTORS="env,host,os,process,container"
    - OTEL_SERVICE_NAME=nodejs
  depends_on:
    - collector
  ports:
    - "4001:4001"
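The OTEL_RESOURCE_ATTRIBUTES variable used above is a comma-separated list of key=value pairs that the SDK merges into the service's resource. A small sketch of how such a value decomposes (the SDK's env resource detector does the real parsing):

```typescript
// OTEL_RESOURCE_ATTRIBUTES example value:
//   service.namespace=web-app,service.name=nodejs,service.version=0.0.1
function parseResourceAttributes(raw: string): Record<string, string> {
  const attrs: Record<string, string> = {};
  for (const pair of raw.split(",")) {
    const idx = pair.indexOf("=");
    if (idx <= 0) continue; // skip malformed entries with no key or no "="
    attrs[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
  }
  return attrs;
}
```

One caveat with the compose file's list form of `environment:`: the surrounding double quotes become part of the variable's value, so dropping them there avoids stray quote characters in the parsed attributes.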

collector.conf

receivers:
  otlp:
    protocols:
      http: {} # HTTP protocol for the OTLP receiver

# Processors for the OpenTelemetry Collector
processors:
  # batch:
  #   send_batch_size: 10 # Size of the batches to send
  #   timeout: 10s        # Timeout for the batch processor
  # memory_limiter:
  #   check_interval: 5s
  #   limit_mib: 4000
  #   spike_limit_mib: 500

# Exporters for the OpenTelemetry Collector
exporters:
  otlp:
    endpoint: tempo:4317
    tls:
      insecure: true

# Service configuration for the OpenTelemetry Collector
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlp]

Grafana tracing screen

(Screenshot: first load of the Angular SPA)
(Screenshot: web service call triggered by a click)

This demo shows that OpenTelemetry browser integration is straightforward with web auto-instrumentation. Not only is the browser-side telemetry available in Grafana, but the frontend and backend trace data are also stitched together into a single distributed trace.

Resources:

From RUM to Front-End Observability with OpenTelemetry:

https://youtu.be/l2_wsvv-Rhs?si=QQ2I8iCUW_PDTSaE

React application:

https://developers.redhat.com/articles/2023/03/22/how-enable-opentelemetry-traces-react-applications

Angular application:
