GraalVM 21.2 with lots of native image usability improvements

Oleg Šelajev
Jul 20

GraalVM 21.2 is released today and in this article we focus on the most visible, important, and exciting updates available in this version.

First things first, GraalVM 21.2 builds are available for download from the usual locations.

GraalVM consists of several components, and with every release we work hard on bringing improvements to them all. So if you’re interested in a particular component and would like to get a more detailed overview of the changes, please consult the documentation.

Native Image

Let’s start by looking at what’s new and noteworthy in Native Image. Back in June we released the new Gradle and Maven plugins for Native Image with initial JUnit 5 testing support. They simplify building native images of your applications and let you run JUnit tests in native image mode to check how your code behaves there.

Since the initial release there have been two minor releases with various bug fixes, improvements, and cleanups, so if you maintain an application or library that works in Native Image, consider using the plugins to run tests and verify behavior in Native Image!

Another quality-of-life improvement is that Native Image now automatically removes unnecessary security providers from the image: the reachable security providers are detected by the static analysis. This means that options like --enable-all-security-services are deprecated and will be removed in the future. It is also possible to disable the automatic detection completely using -H:-EnableSecurityServicesFeature and register the providers manually. You can read more about it in the docs.

One very welcome addition that landed in 21.2 is the implementation of class pre-definition to support ClassLoader.loadClass calls at run time. Classes that need to be loaded at run time must be made available to the static analysis at build time so that they are included in the closed-world analysis, but otherwise code patterns that load classes at arbitrary moments at run time now work in native images just as you’d expect them to.
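For example, a pattern like the following, where the class name is only known at run time, now works in a native image as long as the class was visible to the analysis at build time (the class and helper names here are illustrative, not from the GraalVM docs):

```java
// Loads a class by name at run time. With class pre-definition, the class
// only needs to be visible to the static analysis at image build time.
public class LoadAtRunTime {

    // Returns the name of the dynamically loaded class, or null if not found.
    static String load(String className) {
        try {
            Class<?> clazz = LoadAtRunTime.class.getClassLoader().loadClass(className);
            return clazz.getName();
        } catch (ClassNotFoundException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // The class name could come from configuration or user input.
        String name = args.length > 0 ? args[0] : "java.util.ArrayList";
        System.out.println("Loaded: " + load(name));
    }
}
```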

Another interesting infrastructure enhancement in GraalVM 21.2 is that native images built with -H:+AllowVMInspection now support JFR events written in Java. To record JFR events at run time, JFR support and JFR recording must be enabled with the command-line options -XX:+FlightRecorder and -XX:StartFlightRecording. There aren't that many events implemented yet, but the architecture for implementing them or emitting them from application code is in place.

You can try the following example to see how your custom events could look in practice:
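Here is a sketch of such a custom event (the event class name, label, and field are illustrative, not taken from the GraalVM docs):

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// A custom JFR event written as plain Java. The annotations describe how
// the event shows up in recordings.
@Name("com.example.HelloWorld")
@Label("Hello World")
public class HelloWorldEvent extends Event {

    @Label("Message")
    public String message;

    public static void main(String[] args) {
        HelloWorldEvent event = new HelloWorldEvent();
        event.message = "Hello from a native image!";
        event.commit(); // recorded only if a JFR recording is active
    }
}
```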

Build it into a native image and run it with -XX:+FlightRecorder -XX:StartFlightRecording="filename=recording.jfr".

Then you can see what the event looks like in, for example, VisualVM.

Compiler Updates

Every release brings incremental improvements and new optimizations in the compiler which result in gains on some workloads. Over time these accumulate into significant performance gains. If you haven’t updated recently, now is a good time to leverage all those improvements. In 21.2 a number of interesting optimizations were added to the compiler. Let’s start with the ones available in GraalVM Enterprise, available as part of an Oracle Java SE Subscription.

We improved loop limit analysis for counted loops, so the compiler now also analyzes the control flow preceding a loop to reason about induction variables. This can make more uncounted loops, like the example below, amenable to advanced optimizations.

long i = 0;
if (end < 1) return;
do {
    // body increments i
    i++;
} while (i != end);

Here the compiler can combine the info from before the loop that i is 0 and end ≥ 1 to prove that end > i and use that data for optimizing the loop body.

We also added a novel Strip Mining optimization for non-counted loops, which splits loop iterations into parts, making them easier to optimize later. It is disabled by default; enable it with the -Dgraal.StripMineNonCountedLoops=true option.

We improved compilation of code that uses typical StringBuilder patterns, and enhanced support for these patterns in JDK 11-based GraalVM builds by making the compiler aware of compact strings in JDK 11.
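A typical pattern of this kind is a chain of appends; a small illustrative sketch (the class and method names are made up for this example):

```java
// A common StringBuilder pattern: a chain of appends building one string.
// The compiler recognizes such chains and can optimize them.
public class Greeting {

    static String greet(String name, int count) {
        StringBuilder sb = new StringBuilder();
        sb.append("Hello, ").append(name).append('!');
        sb.append(" You have ").append(count).append(" messages.");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(greet("GraalVM", 3));
    }
}
```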

In GraalVM Community Edition, one notable compiler improvement is the addition of the Speculative Guard Movement optimization, which tries to move a loop-invariant guard, for example an array bounds check, from inside a loop to outside of it. This can improve relevant workloads dramatically.

We also improved safepoint elimination in long counted loops. The compiler can now eliminate safepoints in loops with a long induction variable where it can statically prove that the induction variable stays within the range Integer.MIN_VALUE to Integer.MAX_VALUE. Consider this for-loop example:

for (long i = Integer.MIN_VALUE; i < Integer.MAX_VALUE; i++) {
    // body
}
The compiler can now statically prove that i only iterates within the integer range even though it is a long. As a result, integer overflow doesn't need to be accounted for and the loop optimizes better.

On top of these, a few optimizations in this release are not yet enabled by default and are considered experimental. One, called Write Sinking, tries to move writes out of loops; you can enable it in GraalVM Enterprise with -Dgraal.OptWriteMotion=true. Another optimization available in GraalVM Enterprise, but not yet enabled by default, is a novel SIMD (Single Instruction, Multiple Data) vectorization optimization for sequential code. You can experiment with it using the -Dgraal.VectorizeSIMD=true option. If you're not ready to experiment yourself, stay tuned: in an upcoming standalone article we'll explore in detail what it gives you and how your code can benefit from it.

Polyglot Runtime & Truffle framework

Truffle received a new compilation queuing heuristic, enabled by default. This new heuristic improves the warm-up time of the polyglot runtime on many workloads.

Here’s a brief explanation of what it does: imagine the following synthetic piece of JavaScript code is your application. It runs two functions in a loop, both of which reach the threshold for JIT compilation and need to be compiled:
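The original snippet isn't reproduced here, but a sketch could look like this (the function names match the scenario described; the bodies and iteration counts are illustrative):

```javascript
// Two functions that both get hot enough to be JIT-compiled, but with very
// different amounts of work: highUsage does most of the computation.
function lowUsage() {
    let sum = 0;
    for (let i = 0; i < 1000; i++) sum += i;
    return sum;
}

function highUsage() {
    let sum = 0;
    for (let i = 0; i < 100000; i++) sum += i;
    return sum;
}

for (let i = 0; i < 1000; i++) {
    lowUsage();   // called just as often...
    highUsage();  // ...but compiling highUsage first pays off more
}
```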

In previous versions of GraalVM, the algorithm prioritized compilations in first-come, first-served order. This would result in compiling the lowUsage function first, instead of the more beneficial order with highUsage compiled first.

Now in GraalVM 21.2, the policy is more sophisticated than plain hotness: it takes into account the multi-tier settings (compiling faster for the first tier), the compilation history and deoptimizations (prioritizing code previously determined to be important), and a weighted combination of code hotness and being in active use. All in all, this should result in better warmup for all GraalVM languages.

There are additional configuration options to tweak the heuristic or disable its use completely. Learn more about it in the docs.

Another change is that certain API calls when embedding a polyglot context require a new version of JVMCI, the compiler interface used to plug the compiler into the Java HotSpot VM. The GraalVM distributions for all JDK versions contain the updated JVMCI, but if you’re using GraalVM languages with a different JDK and enable the GraalVM compiler on the module path, make sure to use a JDK version that includes JDK-8264016 for full compatibility.

Otherwise using forced context cancellation (Context.close(true)) or interrupting (Context.interrupt(Duration)) will throw errors. For the best experience, consider using the GraalVM distributions for running your applications or avoid using these APIs. Other possible workarounds are described in the notes.

And since we’re talking about Truffle-related changes, there are some exciting updates in GraalVM 21.2 for projects implementing languages and tools on GraalVM. Truffle libraries can now be prepared for ahead-of-time compilation without prior execution. See the docs for ExportLibrary.useForAOT and the AOT Tutorial for more details.


JavaScript

The JavaScript implementation continues to add and update implementations of the most recent proposals, like the New Set Methods, experimental operator overloading support, or the RegExp Match Indices proposal. You can find details on how to enable these in the release notes.

On top of that, Graal.js improves the development experience with an option to track unhandled promise rejections in a polyglot Context. By default the option is set to none and unhandled promise rejections are not tracked, but you can configure it with the js.unhandled-rejections option to make your development life easier.


Ruby

As always, Ruby saw a continuous stream of compatibility and performance improvements. One very impactful new feature is precise invalidation for Ruby methods and constants, using per-name and per-class assumptions. When loading Ruby code, methods are added one by one due to Ruby semantics, and a lot of code is executed while loading files. With per-class assumptions (which is how it behaves in other Ruby implementations and in TruffleRuby prior to this change), adding a method would invalidate all call sites that call any method of that class. With precise invalidation, only the call sites that actually need to call a different method on their next call are invalidated. This improves warmup on real-world code.

Along with these changes, we updated to Ruby 2.7.3, with the exception of the resolv stdlib, which was not updated (resolv in 2.7.3 has bugs). We also added a new TruffleRuby::ConcurrentMap data structure for use in concurrent-ruby.

For lots of other changes and fixes, check the release notes.


Python

In this release we implemented the _pickle module as a faster alternative to the pure Python version, making serialization faster.

We also improved interoperability with other languages: the dict type now works as you'd expect, backed by the Truffle hash implementation.

As always, a lot of work focused on performance improvements, especially during warmup and in shared engine configurations!

And there are lots of compatibility improvements too. Check the release notes for more details.

LLVM bitcode runtime

There are some C++-related improvements in 21.2. We fixed an issue with the LLVM toolchain not working correctly for C++ on macOS 11.3. We also added support for C++ virtual calls via cross-language interoperability.

The managed mode in GraalVM Enterprise also got better. We updated musl libc to version 1.2.2, and improved compatibility with existing native code bases by adding support for pthreads and pthread synchronization primitives in managed mode.


R

FastR continues to work on compatibility, improving its support for packages in the 2021-02-01 CRAN snapshot:

  • testthat 3.0.1 is partially supported.
  • tibble 3.0.6, vctrs 0.3.6, and data.table 1.13.6 are mostly supported.
  • support for dplyr 1.0.3, ggplot2 3.3.3, and knitr 1.31 is a work in progress.

If you’re interested in support for a particular package, please reach out!


WebAssembly

The WebAssembly runtime saw lots of compatibility improvements and bug fixes to pass tests for as many NPM modules that use WebAssembly as possible.

Java on Truffle

One of the important additions in GraalVM 21.2 for Java on Truffle is the new HotSwap Plugin API, which allows reloading code without restarting a running application. The API is intended to give framework developers better control over reflecting source code edits in the running application. It works by setting up appropriate hooks: the main design principle is that you register various HotSwap listeners that are fired on specified HotSwap events. For example, it can re-run a static initializer, run a generic post-HotSwap callback, or fire hooks when the implementations of a certain service provider change.

The most amazing thing about it is that the API consists of normal Java calls, which makes it much easier to integrate and maintain than bytecode manipulation-based solutions.

For more details, check the documentation on the plugin API.

Another very welcome improvement in 21.2 is better bytecode dispatch, which speeds up the interpreter used in Java on Truffle by about 15-30%. Interpreter speed is a very important factor in how fast your application warms up.

21.2 also improved interoperability between Java on Truffle objects implementing Map, Map.Entry, List, Iterator, or Iterable and other languages, including Java host code.


VisualVM

VisualVM got a number of excellent improvements, including support for JDK 17 and the ability to control its functionality from the command line or other external tools. The latter enables a smooth integration between VisualVM and VS Code with the GraalVM extension installed: you can tell VisualVM to profile your application right from the IDE.

VisualVM can now also save a JFR recording from a live Java process, and has a new Lock Contention view in the Profiler tab. So if you’re using VisualVM for your day-to-day profiling needs, it’s now more powerful than ever.


Documentation

Last but not least, we restructured the GraalVM documentation that ends up on the website. It’s now available in the main project repository on GitHub.

This means that it’s easier than ever to contribute to GraalVM and improve the lives of many other developers in the ecosystem. If you notice docs that could use clarification or incompletely describe your favorite features or tuning options, please consider sharing your expertise and helping the project with a pull request! If in doubt, here’s a short guide on how to do that: contributing to the docs.

Note that the GraalVM Enterprise documentation is available on the site.

As with every release, we want to thank the GraalVM community for all the collaboration, feedback, and discussions, and for sharing your stories about how you use GraalVM.

As we mentioned before, the downloads are available, as always, on the website.

Please don’t hesitate to share any and all feedback! There are several channels available to reach the team; pick whichever works best for you: Slack, GitHub, Twitter, or email.

— GraalVM Team
