MicroProfile-OpenTracing with Supersonic Subatomic Quarkus

In this article we will demonstrate some of the tracing features of the MicroProfile-OpenTracing project while evaluating the performance of the new Java runtime Quarkus. You will also learn how a Java application can be compiled to native code for supersonic performance!

First things first, MicroProfile is a project for building modern cloud-native Java applications. MicroProfile-OpenTracing provides tracing capabilities for MicroProfile technologies by integrating with the OpenTracing project.

The Quarkus project is a modern cloud-native Java runtime which can be compiled into a native Linux executable. How cool is that? It promises quick startup times, a small memory footprint, and generally better performance. The native mode is based on GraalVM, and given its performance characteristics it is a great candidate for cloud-native deployments or functions as a service (FaaS). In this blog post we will only use REST response times as performance indicators.

Example application

The demo application is a simple JAX-RS service simulating a conversation between services. The source code is hosted at https://github.com/pavolloffay/quarkus-tracing. The repository contains all the necessary instructions on how to compile and run it. We will just quickly go through some of the interesting parts of the project.

  • application.properties contains all configuration. This is the place where we can configure the URL to the Jaeger server or the application name.
  • GreetingResource defines JAX-RS service endpoints.
  • GreetingService is an interface used for MicroProfile Rest Client.
  • ConversationService is a CDI bean which uses the GreetingService REST client to call endpoints from GreetingResource. This allows us to model inter-process communication in a single deployment.
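The Jaeger-related part of application.properties might look something like the following (the values here are illustrative defaults, not copied from the repository):

```properties
# Service name shown in the Jaeger UI
quarkus.jaeger.service-name=tracing-example
# Jaeger collector endpoint to which spans are reported
quarkus.jaeger.endpoint=http://localhost:14268/api/traces
# Sample every request
quarkus.jaeger.sampler-type=const
quarkus.jaeger.sampler-param=1
```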

There are a couple of “tracing” things going on:

  • @Traced annotation is used to enable tracing on a CDI bean, in this case ConversationService.
  • GreetingService is a REST client interface which is automatically traced.
  • MicroProfile-OpenTracing enables tracing of all JAX-RS endpoints automatically.
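Sketched in code, the tracing pieces fit together roughly like this (a minimal sketch; the method names and paths are illustrative rather than copied from the repository, and in a real project each type lives in its own file):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.opentracing.Traced;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;
import org.eclipse.microprofile.rest.client.inject.RestClient;

// MicroProfile Rest Client interface: calls made through it are traced automatically.
@RegisterRestClient
@Path("/")
interface GreetingService {
    @GET
    @Path("/hello")
    String hello();
}

// @Traced on a CDI bean creates a span around every business method invocation.
@Traced
@ApplicationScoped
class ConversationService {

    @Inject
    @RestClient
    GreetingService greetingService;

    String converse() {
        // This outgoing REST call produces a child span of the method's span.
        return greetingService.hello();
    }
}
```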

Performance comparison

First, we are going to run the app in classic JVM mode, like any other Java application. There are two ways to execute the app: package it with Maven and run it with Java, or use the Quarkus plugin, which supports hot reload (mvn compile quarkus:dev). Since we don’t need to change classes during testing, we will just compile the app and run it with the java command:

./mvnw clean package
java -jar target/tracing-example-1.0-SNAPSHOT-runner.jar
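With the app up (Quarkus listens on port 8080 by default), the traced endpoints can be exercised with curl; the paths below match the endpoints described above:

```shell
curl http://localhost:8080/hello
curl http://localhost:8080/conversation
```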
Jaeger screenshot showing traces when running in JVM mode.

The screenshot shows traces for the /hello and /conversation endpoints. The invocation time of the /hello endpoint is about 0.27 ms, and /conversation is on average about 11.8 ms. The latter is higher because it consists of two internal REST calls (/conversation calls /hello and /bonjour).

Now let’s compile the app to a native executable and see how the response times change.

./mvnw package -Pnative
./target/tracing-example-1.0-SNAPSHOT-runner
Jaeger screenshot showing traces when running in a native mode.

The response time for the /hello endpoint is now only 0.1 ms! The /conversation endpoint is on average about 3 ms. Since the instrumentation runs inside the process, we can look into each individual request/trace and see whether there are any interesting patterns. Let’s compare a trace for JVM and native mode:

Jaeger screenshot showing a trace when running in JVM mode.
Jaeger screenshot showing a trace when running in a native mode.

The timing patterns on both screenshots look very similar, apart from the difference in the duration of each individual span.
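For a rough sense of the gain, a quick back-of-the-envelope calculation from the average durations quoted above:

```java
// Rough speedup factors computed from the average response times reported above.
public class SpeedupCheck {
    public static void main(String[] args) {
        double helloJvm = 0.27, helloNative = 0.1;  // ms, /hello endpoint
        double convJvm = 11.8, convNative = 3.0;    // ms, /conversation endpoint
        System.out.printf("/hello speedup: %.1fx%n", helloJvm / helloNative);       // ~2.7x
        System.out.printf("/conversation speedup: %.1fx%n", convJvm / convNative);  // ~3.9x
    }
}
```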

Conclusion

We have demonstrated that OpenTracing instrumentation can be used transparently to measure the performance of REST endpoints, and that it also allows us to go one level deeper and compare the execution time of the application’s internal components.

The response times in native mode dropped from 0.27 ms to 0.1 ms for the single request, and from 11.8 ms to 3 ms for the chained request.