Getting Started with OpenTelemetry in distributed Go Microservices

Iván Corrales Solera
Published in Wesovi Labs
12 min read · Jan 26, 2024


Introduction

In the ever-changing world of software development, microservices have become the go-to approach, offering scalability, flexibility, and easy maintenance. But all that good comes with its fair share of challenges, especially in the observability department. As systems get more intricate, keeping tabs on how data flows through different services becomes a top priority for troubleshooting, optimizing performance, and ensuring reliability.

Enter OpenTelemetry, the superhero of observability. It’s an open-source framework designed to make collecting distributed traces and metrics from applications a breeze. With OpenTelemetry, developers can get a peek into how microservices interact, making it far easier to spot bottlenecks, diagnose issues, and fine-tune overall system performance.

Observability, in the microservices world, is like having a crystal ball for your system’s internal state based on what it’s showing on the outside. It’s the trio of tracing, metrics, and logging — essential for keeping a microservices architecture healthy and reliable.

So, buckle up as we dive into this tutorial, breaking down the essential concepts of OpenTelemetry in Go microservices. From getting the framework ready for your project to jazzing up your code with instruments, exporting and visualizing data, and sprinkling in some best practices, this guide is your roadmap to elevating the observability game in your microservices architecture.

Whether you’re a seasoned Go developer or just dipping your toes into the microservices pool, mastering OpenTelemetry can be a game-changer for the upkeep and performance of your distributed systems. Ready to unlock the superpowers of observability in Go microservices with OpenTelemetry? Let’s roll!

OpenTelemetry Concepts

Tracing 🕵️‍♂️

Tracing, a fundamental aspect of OpenTelemetry, involves tracking the flow of a request as it traverses through various components or services in a distributed system. OpenTelemetry provides a tracing API that allows developers to instrument their code and create spans, which represent units of work. These spans are then collected to form distributed traces, offering a visual representation of the journey a request takes through the microservices.

Context Propagation 🌐

In microservices, a single request often spans multiple services. Context propagation ensures that information such as the trace context is carried along with the request as it moves through different components. OpenTelemetry provides propagators (for example, the W3C Trace Context propagator) that carry this information across service boundaries, so trace data flows consistently from one service to the next.
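As a rough illustration (a minimal sketch, not taken from this article's repository), here is what manual propagation over HTTP headers looks like with the W3C Trace Context propagator from go.opentelemetry.io/otel/propagation. In real services, instrumentation libraries typically do this for you; the injectTraceContext and extractTraceContext helpers below are illustrative names.

package main

import (
    "context"
    "net/http"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/propagation"
)

func main() {
    // Register the W3C Trace Context propagator globally (typically done once at startup).
    otel.SetTextMapPropagator(propagation.TraceContext{})
}

// Client side: copy the active trace context from the request's context into its headers
// before sending it to the next service.
func injectTraceContext(req *http.Request) {
    otel.GetTextMapPropagator().Inject(req.Context(), propagation.HeaderCarrier(req.Header))
}

// Server side: pull the caller's trace context out of the incoming headers; spans started
// with the returned context become children of the caller's span.
func extractTraceContext(req *http.Request) context.Context {
    return otel.GetTextMapPropagator().Extract(req.Context(), propagation.HeaderCarrier(req.Header))
}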

Metrics 📊

Metrics are quantitative measurements of system behavior over time. They provide insights into the performance, resource usage, and other aspects of the system. OpenTelemetry supports the instrumentation of code to collect and export metrics, including counters, gauges, and histograms. This capability enables developers to gain a quantitative understanding of how their microservices are behaving.

Instrumentation 🎺

Instrumentation involves adding code to an application to collect data for observability purposes. OpenTelemetry provides instrumentation libraries for various programming languages, making it easier for developers to integrate observability into their applications. This includes creating spans for tracing and emitting metrics, both of which contribute to a comprehensive observability strategy.

Exporter 🚚

Exporters are components responsible for transmitting collected observability data, including spans and metrics, to external systems or backends for analysis. OpenTelemetry supports various exporters compatible with popular observability backends such as Jaeger, Prometheus, and more. This flexibility ensures interoperability across different components of a microservices ecosystem.

Collector 🧲

The OpenTelemetry Collector is an intermediate component that receives observability data from instrumented applications and forwards it to the desired backend or storage. It is configurable and extensible, providing a centralized point for processing and exporting traces and metrics.

Sampling 🎰

Sampling involves deciding which traces or spans to collect and export. This is crucial to balance the overhead of collecting data with the need for comprehensive observability. OpenTelemetry offers sampling options, allowing users to control the rate at which traces are collected, striking a balance between resource usage and observability granularity.
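For instance, the Go SDK ships with samplers such as TraceIDRatioBased and ParentBased that can be plugged into the tracer provider. Here is a minimal sketch (not taken from the article's repository); exporter and resource options are covered in the configuration snippets later in this tutorial.

package main

import (
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
    // Sample roughly 10% of new traces; child spans inherit their parent's decision.
    sampler := trace.ParentBased(trace.TraceIDRatioBased(0.1))

    traceProvider := trace.NewTracerProvider(
        trace.WithSampler(sampler),
        // ... exporter and resource options as shown later in this tutorial ...
    )
    otel.SetTracerProvider(traceProvider)
}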

Understanding these foundational concepts is essential for effectively implementing OpenTelemetry in a microservices environment. This knowledge enhances observability and troubleshooting capabilities, contributing to the overall success of distributed systems.

Setting Up OpenTelemetry in a Go Project

This section will guide you through setting up OpenTelemetry in your Go microservices project. This involves installing the necessary packages, configuring basic settings, and choosing an appropriate exporter for sending traces and metrics to an external observability backend.

Installation

First, install the OpenTelemetry Go SDK using Go Modules. Open your terminal and run the following command:

go get go.opentelemetry.io/otel

This fetches the core OpenTelemetry API module and its dependencies, ensuring you have the latest version.
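Note that the SDK and the individual exporters are published as separate Go modules. Depending on which parts of this tutorial you follow, you will likely also need some of the following (the exact list depends on the exporters you choose):

go get go.opentelemetry.io/otel/sdk
go get go.opentelemetry.io/otel/exporters/stdout/stdouttrace
go get go.opentelemetry.io/otel/exporters/stdout/stdoutmetric
go get go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp
go get go.opentelemetry.io/otel/exporters/prometheus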

Configuring OpenTelemetry

Create a new Go file (e.g., main.go) and import the necessary OpenTelemetry packages. Configure the basic settings, such as the tracer provider and exporter. Below is a basic configuration snippet:

package main

import (
    "context"
    "time"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
    "go.opentelemetry.io/otel/sdk/resource"
    "go.opentelemetry.io/otel/sdk/trace"
    semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)

func main() {
    ctx := context.Background()

    // Create the exporter - let's use a stdout exporter
    traceExporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
    if err != nil {
        panic(err)
    }

    // Create the resource to be traced
    res, err := resource.Merge(
        resource.Default(),
        resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceName("MyService"),
            semconv.ServiceVersion("v0.0.1"),
        ),
    )
    if err != nil {
        panic(err)
    }

    // Configure the trace provider
    traceProvider := trace.NewTracerProvider(
        trace.WithBatcher(traceExporter, trace.WithBatchTimeout(2*time.Second)),
        trace.WithResource(res),
    )
    defer func() { _ = traceProvider.Shutdown(ctx) }()
    otel.SetTracerProvider(traceProvider)
}

Choosing and Configuring an Exporter

Select an appropriate exporter based on your observability backend.

The project is actively developed, and the list of supported exporters is continuously expanded. To get the most up-to-date information, it’s recommended to check the official OpenTelemetry documentation or repository on GitHub. Some of the most popular exporters are listed below:

  1. Jaeger Exporter: Sends traces to Jaeger, an open-source, end-to-end distributed tracing system.
  2. Zipkin Exporter: Exports traces to Zipkin, a distributed tracing system.
  3. Prometheus Exporter: Allows exporting metrics to Prometheus, a monitoring and alerting toolkit.
  4. OTLP Exporter: The OpenTelemetry Protocol (OTLP) exporter sends traces and metrics to services that support the OpenTelemetry protocol.
  5. Logging Exporter: Exports trace and metric data as logs.
  6. Honeycomb Exporter: Sends trace and metric data to Honeycomb, a service for observability.

For example, if you're running Jaeger, you can point the OTLP exporter at Jaeger's OTLP endpoint (recent Jaeger versions accept OTLP natively, and the dedicated Jaeger exporter for Go has been deprecated in favor of OTLP).

We only need to replace the previous traceExporter definition, as shown below:


traceExporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())

// replace by (requires the go.opentelemetry.io/otel/exporters/otlp/otlptrace
// and go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp imports)

client := otlptracehttp.NewClient(
    otlptracehttp.WithEndpoint("localhost:4318"), // host:port of Jaeger's OTLP/HTTP receiver, without scheme or path
    otlptracehttp.WithInsecure(),
    otlptracehttp.WithCompression(otlptracehttp.NoCompression),
)
traceExporter, err := otlptrace.New(ctx, client)

Adjust the endpoint to match your Jaeger setup; 4318 is Jaeger's default OTLP/HTTP port.

With these snippets, you’ve initialized OpenTelemetry in your Go project, configured basic settings, and chosen an exporter.

Instrumenting Go Code

In this section, we’ll delve into the fundamental aspects of instrumenting your Go microservices code with OpenTelemetry. We’ll cover the basics of instrumentation, creating and managing traces and spans, and incorporating metric instrumentation.

Instrumentation Basics

Instrumentation involves adding code to your application to capture relevant data for observability. In the context of OpenTelemetry, this includes creating and managing traces.

package main

import (
    "context"
    "time"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
    "go.opentelemetry.io/otel/sdk/resource"
    "go.opentelemetry.io/otel/sdk/trace"
    semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)

func main() {
    ctx := context.Background()

    // Create the exporter - let's use a stdout exporter
    traceExporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
    if err != nil {
        panic(err)
    }

    // Create the resource to be traced
    res, err := resource.Merge(
        resource.Default(),
        resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceName("MyService"),
            semconv.ServiceVersion("v0.0.1"),
        ),
    )
    if err != nil {
        panic(err)
    }

    // Configure the trace provider
    traceProvider := trace.NewTracerProvider(
        trace.WithBatcher(traceExporter, trace.WithBatchTimeout(2*time.Second)),
        trace.WithResource(res),
    )
    defer func() { _ = traceProvider.Shutdown(ctx) }()
    otel.SetTracerProvider(traceProvider)

    // Create & start the tracer
    tracer := traceProvider.Tracer("MyService")
    ctx, span := tracer.Start(ctx, "hello-span")
    span.SetAttributes(attribute.String("environment", "staging"))
    defer span.End()

    // Add events
    span.AddEvent("startEvent")
    // ...
    span.AddEvent("endEvent")
}

In this snippet, we use the OpenTelemetry tracer to start a new span, representing a unit of work. The defer span.End() ensures that the span is properly closed when the operation is completed.
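Beyond attributes and events, two operations you will use constantly when managing spans are creating child spans and recording errors. Here is a brief sketch (the doWork and step functions are illustrative, not part of the article's code), assuming a tracer provider has already been configured as in the previous snippet:

package main

import (
    "context"
    "errors"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/codes"
)

// step stands in for real work that may fail.
func step(ctx context.Context) error { return errors.New("something went wrong") }

func doWork(ctx context.Context) error {
    // Starting a span from a context that already carries a span makes it a child span.
    ctx, span := otel.Tracer("MyService").Start(ctx, "doWork")
    defer span.End()

    if err := step(ctx); err != nil {
        // Attach the error to the span and mark the span as failed.
        span.RecordError(err)
        span.SetStatus(codes.Error, "step failed")
        return err
    }
    span.SetStatus(codes.Ok, "")
    return nil
}

func main() {
    _ = doWork(context.Background())
}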

Tracing

Tracing involves creating a sequence of spans to represent the flow of requests through your microservices. Here’s an example of tracing multiple operations across services:

package main

import (
    "context"
    "sync"
    "time"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
    "go.opentelemetry.io/otel/sdk/resource"
    "go.opentelemetry.io/otel/sdk/trace"
    semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)

func main() {
    ctx := context.Background()

    // Create the exporter - let's use a stdout exporter
    traceExporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
    if err != nil {
        panic(err)
    }

    // Create the resource to be traced
    res, err := resource.Merge(
        resource.Default(),
        resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceName("MyService"),
            semconv.ServiceVersion("v0.0.1"),
        ),
    )
    if err != nil {
        panic(err)
    }

    // Configure the trace provider
    traceProvider := trace.NewTracerProvider(
        trace.WithBatcher(traceExporter, trace.WithBatchTimeout(2*time.Second)),
        trace.WithResource(res),
    )
    defer func() { _ = traceProvider.Shutdown(ctx) }()
    otel.SetTracerProvider(traceProvider)

    // Start a parent span so operationA and operationB end up in the same trace
    tracer := otel.Tracer("MyService")
    ctx, parentSpan := tracer.Start(ctx, "parent-operation")
    defer parentSpan.End()

    var wg sync.WaitGroup
    wg.Add(2)
    go operationA(ctx, &wg)
    go operationB(ctx, &wg)
    wg.Wait()
}

func operationA(ctx context.Context, wg *sync.WaitGroup) {
    defer wg.Done()
    // Start a child span from the incoming context
    tracer := otel.Tracer("MyService")
    _, span := tracer.Start(ctx, "operationA")
    span.SetAttributes(attribute.String("environment", "staging"))
    defer span.End()

    for i := 0; i < 5; i++ {
        // Add events
        span.AddEvent("iterate operationA")
        time.Sleep(1 * time.Second)
    }
}

func operationB(ctx context.Context, wg *sync.WaitGroup) {
    defer wg.Done()
    // Start a child span from the incoming context
    tracer := otel.Tracer("MyService")
    _, span := tracer.Start(ctx, "operationB")
    span.SetAttributes(attribute.String("environment", "staging"))
    defer span.End()

    for i := 0; i < 5; i++ {
        // Add events
        span.AddEvent("iterate operationB")
        time.Sleep(2 * time.Second)
    }
}

This example demonstrates how to create a single trace whose spans cover multiple concurrent operations within the same service; once context is propagated between services, the same pattern extends across service boundaries.

Metrics

OpenTelemetry also supports metric instrumentation to collect quantitative data about your application’s performance. Below is a simple example of creating and exporting a custom metric:

package main

import (
    "context"
    "time"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/stdout/stdoutmetric"
    "go.opentelemetry.io/otel/metric"
    api "go.opentelemetry.io/otel/sdk/metric"
    "go.opentelemetry.io/otel/sdk/resource"
    semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)

func main() {
    ctx := context.Background()

    // Create the exporter - let's use a stdout exporter
    metricExporter, err := stdoutmetric.New(stdoutmetric.WithPrettyPrint())
    if err != nil {
        panic(err)
    }

    // Create the resource to be traced
    res, err := resource.Merge(
        resource.Default(),
        resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceName("MyService"),
            semconv.ServiceVersion("v0.0.1"),
        ),
    )
    if err != nil {
        panic(err)
    }

    // Configure the meter provider
    meterProvider := api.NewMeterProvider(
        api.WithResource(res),
        api.WithReader(api.NewPeriodicReader(metricExporter,
            api.WithInterval(1*time.Second))),
    )
    defer func() { _ = meterProvider.Shutdown(ctx) }()
    otel.SetMeterProvider(meterProvider)

    meter := otel.Meter("wesovilabs-meter")
    // Define a counter metric
    counter, err := meter.Int64Counter("my_service.int_counter",
        metric.WithDescription("int_counter description"))
    if err != nil {
        panic(err)
    }
    for i := 0; i < 10; i++ {
        // Record an occurrence of an event
        counter.Add(ctx, 1)
        time.Sleep(300 * time.Millisecond)
    }
}

In this snippet, we create a counter metric and record an occurrence of an event. This metric can be exported to an observability backend for further analysis.
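The concepts section also mentioned gauges and histograms. As a rough sketch (reusing the meter, ctx, and the "go.opentelemetry.io/otel/metric" import from the snippet above; the instrument names here are illustrative), histograms record distributions of values, while gauges are usually asynchronous instruments whose callback is invoked on every export cycle:

// Histogram: record a distribution of values, e.g. durations in milliseconds.
hist, err := meter.Float64Histogram("my_service.request_duration",
    metric.WithDescription("request duration"),
    metric.WithUnit("ms"))
if err != nil {
    panic(err)
}
hist.Record(ctx, 12.3)

// Gauge: asynchronous instrument observed via a callback on every export.
_, err = meter.Int64ObservableGauge("my_service.queue_length",
    metric.WithDescription("items currently waiting in the queue"),
    metric.WithInt64Callback(func(_ context.Context, o metric.Int64Observer) error {
        o.Observe(42) // replace with a real measurement
        return nil
    }))
if err != nil {
    panic(err)
}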

With these snippets, you’ve instrumented your Go code with OpenTelemetry for tracing and metric collection.

Showcase: Observability for distributed microservices

In this showcase, we delve into the seamless integration of OpenTelemetry within a microservices architecture, orchestrating communication between two distinct services while efficiently capturing metrics and traces. Our demonstration highlights the utilization of Prometheus for metric persistence, Grafana for visualizing these metrics, and Jaeger for tracing.

Clone the repository from https://github.com/wesovilabs/getting-started-opentelemety-go

Architecture Overview:

Microservice Ping: Acting as the initiator, this microservice exposes a single endpoint, /ping, which sends requests to the /pong endpoint of microservice Pong.

package main

import (
    "context"
    "fmt"
    "log"
    "net/http"
    "os"
    "time"

    "github.com/prometheus/client_golang/prometheus/promhttp"
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
    "go.opentelemetry.io/otel/exporters/prometheus"
    "go.opentelemetry.io/otel/metric"
    api "go.opentelemetry.io/otel/sdk/metric"
    "go.opentelemetry.io/otel/sdk/resource"
    "go.opentelemetry.io/otel/sdk/trace"
    semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)

func main() {
    pongEndpoint := os.Getenv("PONG_ENDPOINT")
    address := os.Getenv("ADDRESS")
    traceBackendEndpoint := os.Getenv("JAEGER_ADDRESS")
    ctx := context.Background()

    // Create the resource to be observed
    res, err := resource.Merge(
        resource.Default(),
        resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceName("Ping"),
            semconv.ServiceVersion("v0.0.1"),
        ),
    )
    if err != nil {
        panic(err)
    }

    // Tracing configuration
    traceClient := otlptracehttp.NewClient(
        otlptracehttp.WithEndpoint(traceBackendEndpoint),
        otlptracehttp.WithInsecure(),
        otlptracehttp.WithCompression(otlptracehttp.NoCompression),
    )
    traceExporter, err := otlptrace.New(ctx, traceClient)
    if err != nil {
        panic(err)
    }
    traceProvider := trace.NewTracerProvider(
        trace.WithBatcher(traceExporter, trace.WithBatchTimeout(2*time.Second)),
        trace.WithResource(res),
    )
    defer func() { _ = traceProvider.Shutdown(ctx) }()
    otel.SetTracerProvider(traceProvider)

    tracer := traceProvider.Tracer("Ping")

    // Metric configuration
    prometheusExporter, err := prometheus.New()
    if err != nil {
        panic(err)
    }
    meterProvider := api.NewMeterProvider(
        api.WithResource(res),
        api.WithReader(prometheusExporter),
    )
    meter := otel.Meter(
        "wesovilabs.com/tutorial/opentelemetry/ping/manual-instrumentation",
        metric.WithInstrumentationVersion("v0.0.1"),
    )
    counter, err := meter.Int64Counter(
        "request_count",
        metric.WithDescription("Incoming request count"),
        metric.WithUnit("request"),
    )
    if err != nil {
        log.Fatalln(err)
    }
    hist, err := meter.Float64Histogram(
        "duration",
        metric.WithDescription("Incoming end to end duration"),
        metric.WithUnit("milliseconds"),
    )
    if err != nil {
        log.Fatalln(err)
    }

    defer func() { _ = meterProvider.Shutdown(ctx) }()
    otel.SetMeterProvider(meterProvider)

    // HTTP endpoints
    // Used to expose metrics in Prometheus format
    http.Handle("/metrics", promhttp.Handler())
    // Endpoint to be observed
    http.HandleFunc("/ping", func(w http.ResponseWriter, req *http.Request) {
        ctx, span := tracer.Start(context.Background(), "ping-request")
        span.SetAttributes(attribute.String("environment", "staging"))
        defer span.End()
        span.AddEvent("Start request processing")
        requestStartTime := time.Now()
        span.AddEvent("Invoke external endpoint")
        if _, err := http.Get(fmt.Sprintf("http://%s/pong", pongEndpoint)); err != nil {
            span.AddEvent("Response with error")
            _, _ = w.Write([]byte(err.Error()))
        } else {
            span.AddEvent("Response success")
            _, _ = w.Write([]byte("ok"))
        }
        elapsedTime := float64(time.Since(requestStartTime)) / float64(time.Millisecond)
        // Record measurements
        attrs := metric.WithAttributes(attribute.String("remoteAddr", req.RemoteAddr), attribute.String("userAgent", req.UserAgent()))
        span.AddEvent("Update metrics")
        counter.Add(ctx, 1, attrs)
        hist.Record(ctx, elapsedTime, attrs)
    })

    http.ListenAndServe(address, nil)
}

Microservice Pong: The recipient microservice could receive requests from microservice Ping or be invoked directly.

package main

import (
    "context"
    "log"
    "net/http"
    "os"
    "time"

    "github.com/prometheus/client_golang/prometheus/promhttp"
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
    "go.opentelemetry.io/otel/exporters/prometheus"
    "go.opentelemetry.io/otel/metric"
    api "go.opentelemetry.io/otel/sdk/metric"
    "go.opentelemetry.io/otel/sdk/resource"
    "go.opentelemetry.io/otel/sdk/trace"
    semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)

func main() {
    address := os.Getenv("ADDRESS")
    traceBackendEndpoint := os.Getenv("JAEGER_ADDRESS")
    ctx := context.Background()

    // Create the resource to be observed
    res, err := resource.Merge(
        resource.Default(),
        resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceName("Pong"),
            semconv.ServiceVersion("v0.0.1"),
        ),
    )
    if err != nil {
        panic(err)
    }

    // Tracing configuration
    traceClient := otlptracehttp.NewClient(
        otlptracehttp.WithEndpoint(traceBackendEndpoint),
        otlptracehttp.WithInsecure(),
        otlptracehttp.WithCompression(otlptracehttp.NoCompression),
    )
    traceExporter, err := otlptrace.New(ctx, traceClient)
    if err != nil {
        panic(err)
    }
    traceProvider := trace.NewTracerProvider(
        trace.WithBatcher(traceExporter, trace.WithBatchTimeout(2*time.Second)),
        trace.WithResource(res),
    )
    defer func() { _ = traceProvider.Shutdown(ctx) }()
    otel.SetTracerProvider(traceProvider)

    tracer := traceProvider.Tracer("Pong")

    // Metric configuration
    prometheusExporter, err := prometheus.New()
    if err != nil {
        panic(err)
    }
    meterProvider := api.NewMeterProvider(
        api.WithResource(res),
        api.WithReader(prometheusExporter),
    )
    meter := otel.Meter(
        "wesovilabs.com/tutorial/opentelemetry/pong/manual-instrumentation",
        metric.WithInstrumentationVersion("v0.0.1"),
    )
    counter, err := meter.Int64Counter(
        "request_count",
        metric.WithDescription("Incoming request count"),
        metric.WithUnit("request"),
    )
    if err != nil {
        log.Fatalln(err)
    }
    hist, err := meter.Float64Histogram(
        "duration",
        metric.WithDescription("Incoming end to end duration"),
        metric.WithUnit("milliseconds"),
    )
    if err != nil {
        log.Fatalln(err)
    }

    defer func() { _ = meterProvider.Shutdown(ctx) }()
    otel.SetMeterProvider(meterProvider)

    // HTTP endpoints
    // Used to expose metrics in Prometheus format
    http.Handle("/metrics", promhttp.Handler())
    // Endpoint to be observed
    http.HandleFunc("/pong", func(w http.ResponseWriter, req *http.Request) {
        ctx, span := tracer.Start(context.Background(), "pong-request")
        span.SetAttributes(attribute.String("environment", "staging"))
        defer span.End()
        requestStartTime := time.Now()
        span.AddEvent("Response success")
        _, _ = w.Write([]byte("pong"))

        elapsedTime := float64(time.Since(requestStartTime)) / float64(time.Millisecond)

        // Record measurements
        span.AddEvent("Update metrics")
        attrs := metric.WithAttributes(attribute.String("remoteAddr", req.RemoteAddr), attribute.String("userAgent", req.UserAgent()))
        counter.Add(ctx, 1, attrs)
        hist.Record(ctx, elapsedTime, attrs)
    })

    http.ListenAndServe(address, nil)
}

Observability

The OpenTelemetry integration shown above instruments both services, capturing the metrics and traces that matter for this showcase.

Metrics Persistence: Prometheus, a widely used monitoring system, is employed to persist metrics collected by OpenTelemetry. Prometheus efficiently stores time-series data, enabling detailed analysis and system performance monitoring over time.

Metrics Visualization: Grafana, a powerful visualization and monitoring platform, is leveraged to provide insightful visualizations of the metrics stored in Prometheus. Grafana’s flexible dashboarding capabilities enable the creation of customized dashboards, offering real-time insights into system behavior and performance.

Tracing Management: Jaeger, an open-source distributed tracing system, manages traces generated by OpenTelemetry. Jaeger provides end-to-end visibility into the flow of requests between microservice Ping and microservice Pong, facilitating the identification of latency bottlenecks and performance optimizations.
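One caveat worth noting: in the handlers above, both services start their spans from context.Background(), so Jaeger will show the Ping and Pong spans as two separate traces. If you want a single end-to-end trace, one option (a sketch, not part of the repository code) is to propagate the W3C trace context between the two services, registering the propagator at startup in both services and importing "go.opentelemetry.io/otel/propagation":

// In both services, once at startup:
otel.SetTextMapPropagator(propagation.TraceContext{})

// Ping handler: replace the plain http.Get with a request that carries the trace context.
outReq, _ := http.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("http://%s/pong", pongEndpoint), nil)
otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(outReq.Header))
resp, err := http.DefaultClient.Do(outReq)

// Pong handler: extract the caller's context before starting the span.
parentCtx := otel.GetTextMapPropagator().Extract(req.Context(), propagation.HeaderCarrier(req.Header))
ctx, span := tracer.Start(parentCtx, "pong-request")

Instrumentation libraries from go.opentelemetry.io/contrib (for example, the net/http instrumentation) can take care of both sides of this automatically.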

Demonstration Flow

  1. Initiation: microservice Ping initiates a request to microservice Pong, triggering a series of interactions between the two services.
  2. Metrics Collection: The OpenTelemetry instrumentation collects metrics about request processing, such as request counts and end-to-end latency.
  3. Metrics Persistence: The collected metrics are persisted in Prometheus, ensuring that historical data is available for analysis and monitoring purposes.
  4. Metrics Visualization: Grafana retrieves metrics from Prometheus and presents them through intuitive dashboards, providing stakeholders real-time insights into system performance.
  5. Tracing Analysis: Jaeger captures traces of the request flow between microservice Ping and microservice Pong, allowing for in-depth analysis of request paths and identification of potential performance optimizations.

Launch the services

To simplify the deployment of this showcase, we will use the docker-compose.yml below:

services:

  ping:
    container_name: ping
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      ADDRESS: 0.0.0.0:8081
      PONG_ENDPOINT: pong:8082
      JAEGER_ADDRESS: jaeger:4318
    entrypoint: ["ping"]
    ports:
      - 8081:8081

  pong:
    container_name: pong
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      ADDRESS: 0.0.0.0:8082
      JAEGER_ADDRESS: jaeger:4318
    entrypoint: ["pong"]
    ports:
      - 8082:8082

  jaeger:
    image: jaegertracing/all-in-one:1.53
    container_name: jaeger
    environment:
      COLLECTOR_ZIPKIN_HOST_PORT: ":9411"
    ports:
      - 6831:6831/udp
      - 6832:6832/udp
      - 5778:5778
      - 16686:16686
      - 4317:4317
      - 4318:4318
      - 14250:14250
      - 14268:14268
      - 14269:14269
      - 9411:9411

  prometheus:
    image: prom/prometheus:v2.49.1
    container_name: prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - 9090:9090

  grafana:
    image: grafana/grafana:9.5.15
    container_name: grafana
    volumes:
      - ./grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    ports:
      - 3000:3000
This file is found in the root of the repository. To launch all the services, run docker-compose up.

Test the services

Run as many requests as you like to generate some traffic (metrics and traces):

>> curl http://localhost:8081/ping
>> curl http://localhost:8082/pong

Metrics can be checked in Grafana — http://localhost:3000

Traces/Spans can be examined in Jaeger — http://localhost:16686/

Conclusion

In this tutorial, we’ve embarked on a journey to enhance the observability of Go microservices using OpenTelemetry. From the initial setup to instrumenting code, propagating context, exporting traces and metrics, and visualizing data, we’ve explored the key aspects of integrating OpenTelemetry into your microservices architecture.

Source code can be found at https://github.com/wesovilabs/getting-started-opentelemety-go
