How to enable NGINX for distributed tracing

Ryan Burn · Published in OpenTracing · Jul 30, 2018

NGINX is a versatile and popular application. Perhaps best known as a web server, it can be used to serve static content, but it is also commonly deployed alongside other services as a component in a distributed system, where it functions as a reverse proxy, load balancer, or API gateway. It is frequently configured to handle authentication, file uploads, request routing, SSL termination, and other tasks that can be offloaded from application services. OpenTracing enables observability into such distributed systems, and the nginx-opentracing module can be used to monitor requests as they pass through and are processed by NGINX, showing how they fit into the rest of the distributed architecture.

In this blog post, I’ll show how to set up NGINX for distributed tracing in a simple reverse proxy deployment.

Getting Started

For the example, NGINX proxies requests to a date service written in Go. Without tracing enabled, the configuration looks like this:

Figure 1: NGINX configuration without tracing enabled
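A minimal sketch of such a reverse-proxy configuration; the backend host name and port here are illustrative placeholders rather than the exact values from the example:

```nginx
events {}

http {
  # The Go date service the proxy forwards to.
  upstream backend {
    server date-server:9001;
  }

  server {
    listen 80;

    location = / {
      proxy_pass http://backend;
    }
  }
}
```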

To turn on OpenTracing, we need to load the nginx-opentracing module and configure a vendor tracer. nginx-opentracing works with any vendor providing an implementation of the OpenTracing C++ API.

Figure 2: NGINX configuration with tracing enabled
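A sketch of the same configuration with the module loaded and a Jaeger tracer configured; the module and plugin paths are assumptions that depend on where nginx-opentracing and the Jaeger C++ plugin were installed:

```nginx
# Load the dynamic nginx-opentracing module (path depends on the build).
load_module modules/ngx_http_opentracing_module.so;

events {}

http {
  # Load the Jaeger tracer plugin and point it at its JSON configuration.
  opentracing_load_tracer /usr/local/lib/libjaegertracing_plugin.so
                          /etc/jaeger-nginx-config.json;
  # Emit spans for the requests NGINX handles.
  opentracing on;

  upstream backend {
    server date-server:9001;
  }

  server {
    listen 80;

    location = / {
      proxy_pass http://backend;
    }
  }
}
```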

The opentracing_load_tracer directive above is what sets up the vendor's tracer (in this case Jaeger). Its first argument points to the tracer's dynamic plugin and the second gives the path to the tracer's configuration file. The Jaeger configuration below samples every request, though in an actual production scenario we would likely want a more selective sampling strategy.

Figure 3: Jaeger JSON configuration
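A configuration along these lines uses Jaeger's constant sampler with a parameter of 1 so that every request is sampled; the service name and agent address are placeholders:

```json
{
  "service_name": "nginx",
  "sampler": {
    "type": "const",
    "param": 1
  },
  "reporter": {
    "localAgentHostPort": "jaeger:6831"
  }
}
```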

Requests processed by NGINX will now start showing up in Jaeger.

Figure 4: NGINX trace in Jaeger

The top-level span above shows the time to process the entire request, while the child span represents the processing of the location = / block. This setup can be helpful if we only want to trace NGINX, but we can realize much more value when we also trace the services NGINX communicates with. To trace the date server, we can write an HTTP handler function like the following:

Figure 5: Traced backend service
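A sketch of what such a handler might look like with the opentracing-go API; the operation name and port are illustrative, and setting up a concrete Jaeger tracer as the global tracer is omitted:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"

	opentracing "github.com/opentracing/opentracing-go"
	"github.com/opentracing/opentracing-go/ext"
)

// dateHandler extracts the span context that NGINX propagates in the
// request headers and records the handler's work as a child span.
func dateHandler(w http.ResponseWriter, r *http.Request) {
	tracer := opentracing.GlobalTracer()
	spanCtx, err := tracer.Extract(
		opentracing.HTTPHeaders,
		opentracing.HTTPHeadersCarrier(r.Header))

	var span opentracing.Span
	if err != nil {
		// No usable incoming context: start a new trace instead.
		span = tracer.StartSpan("getDate")
	} else {
		span = tracer.StartSpan("getDate", ext.RPCServerOption(spanCtx))
	}
	defer span.Finish()

	fmt.Fprintln(w, time.Now().Format(time.RFC1123))
}

func main() {
	// Constructing a Jaeger tracer and registering it with
	// opentracing.SetGlobalTracer is omitted here for brevity.
	http.HandleFunc("/", dateHandler)
	log.Fatal(http.ListenAndServe(":9001", nil))
}
```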

We’ll also need to instruct NGINX to propagate the active span context to the backend so that the spans can be joined together by the collector. Adding opentracing_propagate_context like so to the / location block will accomplish that:

Figure 6: Location block with context propagation enabled
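The location block from before, with the propagation directive added:

```nginx
location = / {
  # Inject the active span context into the proxied request's headers
  # so the backend can continue the trace.
  opentracing_propagate_context;
  proxy_pass http://backend;
}
```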

If we look in Jaeger now, we'll see spans for both NGINX and the Go backend, giving us a better view into the service.

Figure 7: NGINX trace in Jaeger with backend

See here for the complete example together with a docker-compose setup.
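A rough sketch of the kind of docker-compose setup such an example might use, wiring together NGINX, the Go date service, and Jaeger; the service names, build contexts, and published ports are assumptions for illustration:

```yaml
version: "3"
services:
  nginx:
    build: ./nginx            # NGINX built with nginx-opentracing and the Jaeger plugin
    ports:
      - "8080:80"
  date-server:
    build: ./date-server      # the traced Go date service
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "16686:16686"         # Jaeger UI
```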

Conclusion

We showed how to set up the simplest of NGINX deployments for distributed tracing. Additionally, the nginx-opentracing module supports tracing FastCGI deployments and embedded Lua code; it works with LightStep, Zipkin, Jaeger, Datadog, and any other vendor offering an implementation of the OpenTracing C++ API. It's also currently used by Kubernetes' ingress-nginx controller and the nginMesh service mesh for Istio.
