Episode 1: “The Evolution” — Java JIT HotSpot & C2 Compilers (Building “Super Optimum Microservices Architecture” Series)

A B Vijay Kumar
Sep 5 · 6 min read

In my search for the optimum container tech, I have been playing around with various combinations of open-source runtimes and frameworks.

In this blog, I will walk you through what I think is one of the optimum container stacks.

Before I dig into the stack, let me spend some time walking through some of the non-functional requirements of a container & Serverless/FaaS-based MicroServices architecture.

IMHO, the following are some of the key requirements:

Smaller Footprint: Eventually, all of these MicroServices are going to run on the cloud, where we “pay for what we use”…what we need is a runtime that has a smaller footprint and uses CPU cycles optimally, so that we can run more on less infrastructure.

Quicker bootstrap: Scalability is one of the most important aspects of a container-based MicroServices architecture, so the faster a container boots up, the faster the cluster can scale. This is even more important for Serverless architectures.

Built on Open Standards: It's important that the underlying platform/runtime is built on open standards, as that makes it easy to port and run workloads in a hybrid, multi-cloud world and avoid vendor lock-in.

Faster Build time: In this agile world, where we roll out fixes/features/updates very frequently, it's important that builds and rollouts are quick…including real-time deployment of changes during development, so we can test as we develop.

Let's park these requirements for some time…let me go down the stack to the foundational elements, and work my way up the stack to build (what I believe is) the optimum container platform that would deliver the above requirements.

Since there is a lot to go through, I have divided this series into 4 episodes.

Episode 1: “The Evolution” — Java JIT HotSpot & C2 compilers (the current episode…scroll down)

Episode 2: “The Holy Grail” — GraalVM

In this blog, I will talk about how GraalVM embraces polyglot programming, providing interoperability between various programming languages. I will then cover how it extends HotSpot, providing faster execution and smaller footprints with “ahead-of-time” compilation & other optimisations.

Episode 3: “The Leapstep” — Quarkus+CRI-O

In this blog, I will talk about how Quarkus takes a leap-step, providing the fastest, smallest runtime and the best developer experience for building Java MicroServices. I will also introduce CRI-O and the ecosystem of tools it brings.

Episode 4: “The Final Showdown” — Full stack MicroServices/Serverless Architecture

In this blog, I will put all the pieces together and talk about how they combine to build a robust, scalable, fast, and thin MicroServices architecture.

I hope you will enjoy this series…

Episode 1: “The Evolution”

With Java, we achieved the “write once, run anywhere” dream in the mid-90s. The approach was very simple: Java programs are compiled to “bytecode”.

Interesting fact: bytecode is called bytecode because each instruction opcode is a single byte long. In fact, there were even Java CPUs built to execute bytecode directly in hardware!!! They didn't take off.

We have JVM implementations for each supported operating system. The respective JVM will “interpret” the bytecode, translating each instruction to machine instructions (using something like a lookup table). Obviously, this is slow, as the interpreter goes one instruction at a time!!!
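If you want to see bytecode for yourself, here is a minimal sketch (the class name Adder is mine; javac and javap ship with every JDK):

    // Adder.java: a trivial class whose bytecode we can inspect
    public class Adder {
        public static int add(int a, int b) {
            return a + b;
        }
    }

Compiling with javac Adder.java and disassembling with javap -c Adder prints the bytecode for add, which looks roughly like this:

    iload_0    // push the first int argument onto the operand stack
    iload_1    // push the second int argument
    iadd       // pop both, add them, push the result
    ireturn    // return the int on top of the stack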

To speed this up, it makes sense to identify the code that runs most often, compile it to native machine code, and cache it 🤔.

That is exactly what later versions of the JVM started doing. A performance counter was introduced that counts the number of times a particular method/snippet of code is executed. Once a method/code snippet has been executed a particular number of times (the threshold), that code snippet is compiled, optimised & cached by the “C1 compiler”. The next time that code snippet is called, the compiled machine instructions are executed directly from the code cache, rather than going through the interpreter. This brought in the first level of optimisation.
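You can actually watch this happen. Here is a minimal sketch (HotLoop and square are my names; -XX:+PrintCompilation is a standard HotSpot flag):

    // HotLoop.java: call a small method often enough to cross the
    // invocation threshold and trigger JIT compilation
    public class HotLoop {
        static long square(long x) {
            return x * x;
        }

        public static void main(String[] args) {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += square(i);  // becomes "hot", gets compiled and cached
            }
            System.out.println(sum);
        }
    }

Running it with java -XX:+PrintCompilation HotLoop logs each method as it gets compiled, including the compilation tier, so you can see square (and the JDK's own methods) move from the interpreter into the code cache.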

While the code is executing, the JVM performs runtime profiling and comes up with hot code paths and hotspots. It then runs the “C2 compiler” to further optimise those hot code paths…and hence the name “HotSpot”.

C1 compiles faster and is good for short-running applications, while C2 is slower and heavier but produces more optimised code, making it ideal for long-running processes like daemons, servers etc., where the code performs better over time.

In Java 6, we had the option to use either C1 or C2 (with the command-line argument -client for C1 or -server for C2); in Java 7, we could use both together (tiered compilation); and from Java 8 onwards, tiered compilation became the default behavior.
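For reference, this is roughly how those modes are selected on the command line (MyApp is a placeholder class name; exact defaults vary by JVM version):

    java -client MyApp                   (Java 6 era: C1 only)
    java -server MyApp                   (Java 6 era: C2 only)
    java -XX:+TieredCompilation MyApp    (Java 7: C1 and C2 together)
    java -XX:-TieredCompilation MyApp    (Java 8+: opt out of the default tiering)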

The diagram below illustrates the flow…

[Diagram: the JIT compilation flow (interpreter → C1 → C2)]

Here are some of the code optimisations that the JVM compilers perform:

  • Removing null checks (for variables that are never null)
  • Inlining small, frequently called methods, reducing method-call overhead
  • Optimising loops by combining, unrolling & inverting them
  • Removing code that is never executed (dead-code elimination)

and many more…the snippet below shows the kind of code where several of these optimisations kick in.
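Here is a sketch of such code (the names are mine; what the JIT actually does depends on the JVM version and the runtime profile):

    // OptimizationDemo.java: code shapes the JIT optimises well
    public class OptimizationDemo {

        // Small and called from a hot loop: a prime inlining candidate,
        // which removes the method-call overhead entirely.
        static int doubleIt(int x) {
            return x * 2;
        }

        // If profiling shows x is never negative, C2 can treat the
        // branch as dead code and compile an optimistic fast path
        // (guarded so it can deoptimise if the assumption breaks).
        static int clamp(int x) {
            return (x < 0) ? 0 : x;
        }

        public static void main(String[] args) {
            int[] data = new int[1_000];  // provably non-null in the loop,
                                          // so null checks can be elided
            long sum = 0;
            for (int iter = 0; iter < 100_000; iter++) {
                for (int i = 0; i < data.length; i++) {
                    // hot inner loop: a candidate for unrolling
                    sum += clamp(doubleIt(data[i]));
                }
            }
            System.out.println(sum);
        }
    }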

Whatever is said and done, JIT (Just-In-Time) compilation has a cost: there is a lot of work the JVM has to do at runtime, which slows down startup and warm-up.

An Ahead-of-Time (AOT) compilation option was introduced in Java 9, where you can generate the final machine code directly, using jaotc.
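A minimal sketch of the jaotc workflow (HelloWorld is my example; jaotc shipped as an experimental tool on Linux x86-64 from JDK 9, and was removed again in JDK 17):

    // HelloWorld.java: a class we will compile ahead of time
    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, AOT!");
        }
    }

You compile as usual with javac HelloWorld.java, produce a native shared library with jaotc --output libHelloWorld.so HelloWorld.class, and then run with java -XX:AOTLibrary=./libHelloWorld.so HelloWorld, so the JVM loads the precompiled code instead of interpreting and JIT-compiling it.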

This code is compiled for a specific target architecture, so it is not portable…on x86, we can have both Java bytecode and AOT-compiled code working together.

The bytecode will go through the approach that I explained previously (C1, C2), while the AOT-compiled code goes directly into the code cache, reducing the load on the JVM. Typically, the most frequently used libraries can be AOT-compiled for faster response.


This is the story of the Java VM…and pretty much every language has a similar story: it goes through a similar inception and, over a period of time, the compiler/VM gets optimised to run faster.

In the next episode, we will look at how GraalVM takes this further by reducing the footprint, optimising execution, and bringing in support for polyglot/multi-language interoperability.

You can read the next episode here: Episode 2: GraalVM — “The Holy Grail”
