GraalVM 22.0 is here!

Alina Yurenko
Jan 25, 2022 · 9 min read


Today we are releasing GraalVM 22.0! It brings new features and improvements throughout all its components, and we’ll talk about the highlights in this blog post.

Get Updated

We are releasing GraalVM 22.0 for JDK 11 and JDK 17.

As always, you can download GraalVM Community from GitHub, and get the GraalVM Enterprise builds from OTN.

You can also install the latest version of GraalVM via the GraalVM Extension Pack for Java for VS Code and with Homebrew on macOS.
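For example, on macOS the Community Edition can be installed from the GraalVM Homebrew tap (the cask name below matches the JDK 17 build of this release and may differ for other versions):

```shell
# Install GraalVM CE for JDK 17 via Homebrew (cask name may vary by release)
brew install --cask graalvm/tap/graalvm-ce-java17

# Optionally add Native Image with the GraalVM Updater
gu install native-image
```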

You can find out what’s new and watch demos in the 22.0 unboxing stream:

Now, let’s take a look at what’s inside!

Native Image

One of our key focus areas in Native Image is to improve the developer experience. We have significantly reduced image build times and memory usage over the past few releases. This work continued in 22.0 with updates to reduce image size: a more compressed encoding for stack frame metadata shrinks all images. In the Enterprise edition, an optimized implementation of String.format() goes further: it makes localization classes unreachable for small images such as "Hello World", significantly reducing their size.

Another related update in this release is the new build output for Native Image. It breaks the build process down into stages (initializing, performing analysis, and so on), visualizes the current stage, and shows code and heap breakdowns, reachability statistics, RSS and CPU usage, and more. Here's an example:

New build output for Native Image

This feature is enabled by default, so take it for a spin and tell us what you think! If you rely on the previous build output, you can switch to it by using -H:-BuildOutputUseNewStyle. Please note that the previous output style is deprecated and will be removed in a future release.
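As a sketch, switching back looks like this (the flag name is from this release; the jar name is hypothetical):

```shell
# Build with the deprecated pre-22.0 build output style
native-image -H:-BuildOutputUseNewStyle -jar app.jar
```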

Another update is related to the memory requirements of native executables. The new garbage collection policy for the Serial Garbage Collector (Serial GC), introduced in 21.3, is now enabled by default. It reduces the max RSS size of native executables by up to 30%. For a typical microservices workload such as Spring Petclinic, we measured peak throughput improvements of up to 23.22%.

Another feature introduced in 21.3, and extended in this release, is support for the Java Platform Module System. In particular, Native Image now supports the --add-reads and --add-modules options. Also, all module-related options such as --add-reads, --add-exports, and --add-opens are now applied before scanning the classpath/module-path. This ensures the modules are properly configured before class loading, which avoids class loading errors. Also, more information about modules is added to the image heap, which allows more module introspection at run time.
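As a sketch, building a modular application could look like this (module, package, and class names are hypothetical):

```shell
# Build a native executable from a named module; module-related options
# are applied before the module path is scanned, so the module graph is
# configured correctly before class loading.
native-image \
  --module-path target/classes \
  --add-modules com.example.app \
  --add-reads com.example.app=com.example.util \
  -m com.example.app/com.example.Main \
  example-app
```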

GraalVM Native Build Tools added several improvements over the past few releases, such as improved integration with the native agent. You can get the latest release from GitHub.

Speaking of the latest releases and features, we added support for reflective introspection of sealed classes in JDK 17: Class.isSealed() and Class.getPermittedSubclasses(). Note that as part of this release, we removed the option -H:SubstitutionFiles=... to register substitutions via a JSON file.
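As a sketch of what this enables, the following plain JDK 17 snippet (class names are illustrative) now behaves the same way in a native executable:

```java
import java.util.Arrays;

// Sketch: sealed-class introspection that Native Image now supports (JDK 17).
public class SealedIntrospection {
    // All permitted subclasses live in the same file.
    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    public static void main(String[] args) {
        System.out.println(Shape.class.isSealed());   // true
        // Collect and sort the permitted subclasses for stable output.
        String[] names = Arrays.stream(Shape.class.getPermittedSubclasses())
                .map(Class::getSimpleName)
                .sorted()
                .toArray(String[]::new);
        System.out.println(String.join(", ", names)); // Circle, Square
        System.out.println(Circle.class.isSealed());  // false: records are final, not sealed
    }
}
```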

Java and Compiler Updates

In this release, we significantly changed the way the GraalVM Enterprise compiler treats profiling information. The GraalVM compiler was designed as a highly optimizing JIT compiler that relies heavily on profiling information collected by the underlying runtime (HotSpot VM or Native Image). The compiler uses these profiles to identify which code branches are executed most often, how frequently loops run, and which types appear in polymorphic code, and so to determine where to focus its optimization efforts. The quality of the profiles is therefore critical to many optimizations such as inlining, duplication, and vectorization.

In this release, the compiler can automatically switch to an AOT (ahead-of-time) mode in which major optimizations still do a reasonable job in the absence of profiles. This helps Truffle languages, which do not profile uncommon patterns that can still become hot, as well as Native Image executables built without profile-guided optimizations. With this optimization in place, we observe performance improvements in GraalVM Enterprise of up to 25% for loop- and type-check-heavy benchmarks that lack good profiles. The optimization is always enabled and cannot be disabled, as it sits at the core of the compiler; it is transparent to the user and should produce generally better code when precise branch and loop profiles are unavailable.

A new loop rotation optimization now converts more uncounted loops to counted loops so that they can benefit from optimizations such as vectorization and partial unrolling. For workloads containing many uncounted loops of a similar shape, loop rotation brings performance improvements of up to 30%. This optimization is available in GraalVM Enterprise and is disabled by default in 22.0; to enable it, use the -Dgraal.LoopRotation=true flag. We plan to enable it by default in a future release.
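Opting in looks like this (the jar name is hypothetical):

```shell
# Enable loop rotation on GraalVM Enterprise 22.0
java -Dgraal.LoopRotation=true -jar app.jar
```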

In both Community and Enterprise editions, we added a new optimization to improve the Native Image performance of a type switch (i.e., a series of cascading instanceof branches). For example, consider the following code:

void foo(Object o) {
    if (o instanceof A) {
        // ...
    } else if (o instanceof B) {
        // ...
    } else if (o instanceof C) {
        // ...
    }
}

In Native Image, the compiler graph for this used to be:

void foo(Object o) {
    if (o != null && A.class.isAssignableFrom(o.getClass())) {
        // ...
    } else if (o != null && B.class.isAssignableFrom(o.getClass())) {
        // ...
    } else if (o != null && C.class.isAssignableFrom(o.getClass())) {
        // ...
    }
}

With the new optimization, the null check and load of o's class are factored out:

void foo(Object o) {
    if (o != null) {
        Object nonNullO = o;
        Class<?> oClass = nonNullO.getClass();
        if (A.class.isAssignableFrom(oClass)) {
            // ...
        } else if (B.class.isAssignableFrom(oClass)) {
            // ...
        } else if (C.class.isAssignableFrom(oClass)) {
            // ...
        }
    }
}

One thing to keep in mind is that this optimization only pays off if there are subclasses of A, B, and C that Native Image sees as allocated. Otherwise, the instanceof tests are reduced to == comparisons on the class of o.

GitHub Action for GraalVM

Recently we released an official GitHub action for GraalVM that makes it easy to set up and use GraalVM Community Edition, Native Image, and Truffle languages and tools in your GitHub Actions workflows. Thanks to the community for providing feedback on this and for sharing their excitement on Twitter!

The new action exports $GRAALVM_HOME and sets $JAVA_HOME accordingly, so you can use GraalVM to build, test, and deploy your apps. The action also extends $PATH so you can directly invoke Truffle languages and tools, and it sets up build environments according to the selected components, allowing you, for example, to generate native images without any additional steps. Here's an example of how you can use the new action in your workflows:
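A minimal workflow sketch (the action inputs shown follow the setup-graalvm README at the time of writing; adjust versions as needed, and HelloWorld.java is a hypothetical source file):

```yaml
name: GraalVM build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: graalvm/setup-graalvm@v1
        with:
          version: '22.0.0'
          java-version: '17'
          components: 'native-image'
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and run a native executable
        run: |
          javac HelloWorld.java
          native-image HelloWorld
          ./helloworld
```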

Using GitHub action for GraalVM

For additional templates, a list of all features and options, and more information, check out the GitHub Action for GraalVM on the GitHub marketplace, and feel free to provide feedback or request features on the corresponding setup-graalvm repository.

Polyglot Runtime and Embedding

In GraalVM Enterprise, we introduced polyglot isolates, which, along with a number of other features, enable heap isolation between the host and guest applications. Using isolates improves the security, startup, and warmup time of Truffle languages. You can create an isolate for a Context (Engine) by calling Context.Builder.option("engine.SpawnIsolate", "true"). In this mode, calls between host and guest are more costly, as they need to cross a native boundary, and we recommend using the HostAccess.SCOPED policy to avoid strong cyclic references between host and guest. Note that this mode is experimental in this release and only supported for JavaScript; we plan to extend support in future releases.
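In code, this looks roughly as follows (a sketch against the org.graalvm.polyglot API; it requires GraalVM Enterprise with the JavaScript isolate image installed, so it will not run on a stock JDK):

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.HostAccess;

public class IsolateDemo {
    public static void main(String[] args) {
        // Spawn the JavaScript engine in its own isolated heap.
        try (Context context = Context.newBuilder("js")
                .allowExperimentalOptions(true)        // the option is experimental in 22.0
                .option("engine.SpawnIsolate", "true") // run the guest in a polyglot isolate
                .allowHostAccess(HostAccess.SCOPED)    // avoid strong host<->guest cycles
                .build()) {
            System.out.println(context.eval("js", "6 * 7").asInt());
        }
    }
}
```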

We also enabled Auxiliary Engine Caching in JavaScript and Node in native mode. Engine Caching is intended to eliminate the warmup of programs running on Truffle languages, which comes from operations such as loading, parsing, profiling, and compilation. Within a single OS process, the work performed during warmup can be shared with in-process engine caching. Auxiliary engine caching builds upon this mechanism but adds the capability to persist a code snapshot with ASTs and optimized machine code to disk. This way, the warmup can be significantly reduced even for the first execution of the guest application.


In this release of the GraalVM runtime for JavaScript, we enabled ECMAScript 2022 mode by default. We implemented several new proposals, such as Intl.DisplayNames v2, Intl Locale Info, Intl.DateTimeFormat.prototype.formatRange, Extend TimeZoneName Option, Intl Enumeration API, which you can use with corresponding flags. Also, the Node.js runtime was updated to version 14.18.1.


We continue to work on the compatibility of the GraalVM Python runtime and extending module support. In this release, we added support for the pyexpat and _csv modules, and improved compatibility with the wheel and click PyPI packages.


This release adds support for Ruby 3.0 (see this GitHub issue). Most of the Ruby 3 changes are implemented in this release, with the exception of Ractor, parser changes, and keyword argument changes.

We also added several optimizations to make the interpreter faster, improving application performance before the code is JIT-compiled. Just recently we published a blog post comparing the performance of TruffleRuby and other Ruby implementations, so if you are working with Ruby, make sure to check it out.


In this release, we kept working on compatibility and package support. We also adopted Truffle’s NodeLibrary, which provides access to guest language information associated with a particular Node location.

Java on Truffle

Exactly one year ago we introduced Java on Truffle. It’s come a long way since — we added many new features, improved startup and peak performance, and saw a lot of interest from the community.

We also added support for running native code with the GraalVM LLVM runtime. It can be enabled with the --java.NativeBackend=nfi-llvm experimental flag. Native JDK libraries can be installed with gu install espresso-llvm; once installed, they are picked up by the nfi-llvm native backend. This makes it possible to bypass some limitations of the default native backend (nfi-dlmopen); in particular, it avoids crashes that can happen on some glibc versions when using multiple contexts.
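A sketch of the setup (the component and flag names are from this release; MyMainClass is a hypothetical application class):

```shell
# Install the LLVM-based native JDK libraries for Java on Truffle
gu install espresso-llvm

# Run on the Java on Truffle runtime with the LLVM native backend
java -truffle --experimental-options --java.NativeBackend=nfi-llvm MyMainClass
```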

Also, in this release we extended class redefinition functionality — in particular, we added support for changes to fields and class access modifiers.


Along with compatibility and performance work, the WebAssembly runtime adopted the new Truffle Frame API.

LLVM Runtime

There were several improvements and fixes that went into this release. In particular, loop count profiles are now also reported in first-tier compiled code, improving warmup by transitioning from first-tier to second-tier compilation sooner for loop-heavy methods.


In this release, we’ve made several changes to the developer experience of the GraalVM Extension Pack for Java for VS Code. In particular, we added the ability to manage GraalVM installations with SDKMan, provided a graphical UI for the change method signature refactoring, and added the Project View for Gradle and Maven projects.

Project explorer UI

You can now customize Run Configuration parameters such as program arguments, VM options, and environment variables. For more advanced scenarios, you can set configuration via launch.json.

Also, VisualVM added preliminary support for JDK 18 and exact thread state monitoring via JFR events.

Exact thread state monitoring in VisualVM

Truffle Language and Tool Implementations

An important update in this release is the introduction of sharing layers. A sharing layer is a set of language instances that share code within one or more polyglot contexts. In previous versions, language instances were shared individually whenever a new language context was created. Instead, language instances are now reused for a new context if and only if the entire layer can be shared. A layer can be shared if all initialized languages of the layer support the same context policy and their options are compatible.

Previously, different Truffle languages used different mechanisms for exiting. This wasn't optimal, as a Truffle language had no way to detect and handle an exit triggered by a different language. As of 22.0, Truffle supports a hard explicit exit of a polyglot context, triggered by guest languages using TruffleContext.closeExited(Node, int). This provides a unified way for languages to trigger the exit of the underlying polyglot context. When triggered, all initialized guest languages are first notified using TruffleLanguage.exitContext(C, ExitMode, int), then all context threads are stopped, and finally the context is closed. This hard explicit exit is referred to simply as "hard exit".

New APIs were also added; learn more about these changes in the project changelog.


We are grateful to the community for all the feedback, suggestions, and contributions that went into this release. If you have additional feedback on this release, or suggestions for features that you would like to see in future releases, please share them with us on Slack, GitHub, or Twitter.

— the GraalVM team


