Today we’re happy to announce the GraalVM 20.3.0 release. This is the final feature release of 2020, and if you’re running GraalVM 20.x we recommend updating to 20.3. It is also the first long-term support (LTS) release for GraalVM Enterprise. If you’re running one of the 19.x releases or an earlier 20.x release, please consider upgrading to 20.3.0 in the near future.
As with every release, we’re deeply grateful to the wonderful GraalVM community for all the feedback, for the collaboration and discussion of issues, for sending pull requests and documentation updates, and, last but not least, for spreading the word about the GraalVM project and how it helps your projects. Together we make the GraalVM ecosystem extraordinary!
You can download GraalVM 20.3.0 right now:
In this article we want to highlight some of the most notable changes in GraalVM 20.3.0. There are many components in GraalVM and every release brings improvements and fixes to all of them. So for a more detailed list of changes please refer to the release notes.
The GraalVM compiler is at the heart of the GraalVM project: it allows GraalVM to show the best results for Java benchmarks; it’s used for the ahead-of-time compilation when building native images; it’s used to optimize the performance of languages running on the Truffle language implementation framework. So all updates to the compiler are extremely important and impact almost all projects in the GraalVM ecosystem.
GraalVM 20.3 brought a number of noteworthy improvements to the compiler. 20.3.0 enhanced the heuristics that drive simulation-based loop peeling in GraalVM Enterprise, specifically for cases where objects in an array are lazily initialized in the first iteration of a loop and used later. As a result, some Python microbenchmarks saw performance improvements of up to 40%. Speaking of arrays, code generation for initializing newly allocated arrays improved, and we corrected corner cases where certain large array allocations (8–128 MB) were being initialized twice. Another particularly interesting change is in the advanced code duplication optimization in GraalVM Enterprise 20.3.0. This release addresses several misclassifications in the duplication heuristic. The result is a peak performance improvement of up to 17% on some LLVM bitcode workloads and a 5% increase on relevant Java workloads.
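To illustrate, here is a minimal sketch (all names are illustrative, not from the GraalVM sources) of the code shape that simulation-based loop peeling targets: an array lazily initialized in the first loop iteration and only read afterwards. Peeling the first iteration lets the compiler optimize the hot remainder of the loop without the initialization checks.

```java
// Sketch of the pattern: the first iteration allocates and fills the
// array, later iterations only read it. After the first iteration is
// peeled off, the remaining loop body no longer needs the null check
// or the allocation path.
public class LazyInitExample {
    static Integer[] cache;

    static int sum(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            if (cache == null) {            // only true in the first iteration
                cache = new Integer[n];
                for (int j = 0; j < n; j++) {
                    cache[j] = j;           // lazy initialization
                }
            }
            total += cache[i];              // steady-state work
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(10));        // prints 45
    }
}
```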
Introduced in a previous release,
libgraal dramatically improved the compilation speed of the GraalVM compiler. However, a small amount of class loading was still performed during its initialization. This was visible in the warmup curves of micro benchmarks that had very short iterations. For example, on the
NBody benchmark from the "Are We Fast Yet?" benchmark suite, the first two benchmark iterations took ~110 milliseconds on
libgraal, compared to 45 milliseconds on C2. This class loading has now been eliminated or delayed until after libgraal initialization. A secondary benefit is that it fixed problems related to VM assumptions about the JIT compiler not loading classes.
Individual optimizations and better heuristics don’t always improve all workloads; often they are only marginally better for some. But the continuous addition of small incremental changes does pay off in the long run. Here’s a graph illustrating the progress of 20.3.0 over last year’s 19.3.0 release:
Note that last year’s performance was already state-of-the-art for these workloads, showing world-class results. The fact that 20.3 improves on these results is what makes the performance even more impressive.
GraalVM Native Image is a very versatile technology supporting cloud deployments, CLI tools, and deployments on embedded and mobile devices.
In 20.3.0, Native Image improved its container awareness: on Linux, resource limits such as the processor count and available memory size are read from cgroup v2 configurations. The processor count can also be overridden on the command line.
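For example, assuming an already-built native executable, the processor count seen by the application can be overridden at run time; the `-XX:ActiveProcessorCount` option shown here follows the Native Image documentation, but treat the exact spelling as an assumption to verify against your GraalVM version:

```
# Run a native executable, pretending only 2 CPUs are available,
# regardless of what the cgroup configuration reports:
./my-app -XX:ActiveProcessorCount=2
```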
The G1 garbage collector, available in GraalVM Enterprise on the Java HotSpot VM, is now also supported in native executables generated by Native Image, and can be enabled at image build time. G1 also supports performance counters when the image is built with -H:+AllowVMInspection.
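As a sketch, building an image with G1 and performance counters enabled could look like this (`--gc=G1` is the simplified option name from the 20.3 release notes; verify it against your GraalVM version, and note that G1 support requires GraalVM Enterprise):

```
# Build a native executable that uses the G1 garbage collector
# and exposes performance counters at run time:
native-image --gc=G1 -H:+AllowVMInspection -jar my-app.jar
```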
Isolated compilation is now available in the Community Edition and separates Truffle applications and the runtime compiler from each other, which improves performance by reducing interference between them such as with garbage collection.
The options to enable and disable assertions now support the full syntax for specifying package and class names. Assertions need to be configured at image build time, using the familiar -ea / -da and -esa /
-dsa options. The definition of "system assertions" has been expanded to include assertions not only in the JDK, but also in the Substrate VM runtime system, for example the garbage collector.
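For example, assuming the same assertion syntax as the java launcher (the package name below is illustrative), assertions can be baked into an image at build time:

```
# Enable assertions for one package subtree, plus system assertions
# (JDK and Substrate VM runtime), when building the image:
native-image -ea:com.example.myapp... -esa -jar my-app.jar
```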
This release also brings many small performance and memory footprint optimizations. In particular, a new implementation of type checks improves the performance of
Class.isAssignableFrom while requiring less type data in the image heap.
Thanks to a collaborative community effort, basic debug info generation is now also available on Windows.
As in every release, we fixed a lot of issues reported on GitHub. One notable example is an issue where some native executables were not compressed correctly by the UPX utility. The compression ratio varies by project, but UPX often yields 60–70% smaller executables for example microservice applications.
Language Implementation Framework (Truffle)
The Truffle Language Implementation Framework underpins all the non-JVM native languages running on GraalVM. In 20.3.0, one of the major themes for Truffle was improving warmup: how quickly programs become fast. Among the most significant changes addressing warmup, 20.3.0 enables by default the elastic allocation of Truffle compiler threads depending on the number of available processors; the previous behavior of using one or two compiler threads can still be enabled explicitly.
If you are embedding these languages, for example in a Java application, there are important improvements in 20.3.0. For example, we added the
Context.interrupt(Duration) API to interrupt a polyglot
Context execution. The interrupt is non-destructive, meaning that the polyglot Context can still be used for further execution.
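A minimal sketch of the new API, assuming the GraalVM SDK (org.graalvm.polyglot) is on the class path; this is illustrative rather than a complete program, and it ignores the race between the script starting and the interrupt for brevity:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.graalvm.polyglot.Context;

public class InterruptExample {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try (Context context = Context.create("js")) {
            // Start a long-running script on another thread.
            executor.submit(() -> context.eval("js", "while (true);"));

            // Interrupt it from this thread, waiting up to 5 seconds
            // for the interrupt to take effect.
            context.interrupt(Duration.ofSeconds(5));

            // The interrupt is non-destructive: the same context
            // can evaluate code again afterwards.
            System.out.println(context.eval("js", "6 * 7"));
        } finally {
            executor.shutdown();
        }
    }
}
```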
GraalVM Enterprise introduced experimental sandbox resource limits, configured with options such as:
- sandbox.MaxStatements=<long> to limit the maximum number of statements executed,
- sandbox.MaxCPUTime=1000ms to limit the total maximum CPU time spent running the application,
- sandbox.MaxThreads=<int> to limit the number of threads that can be concurrently used by a context.
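As a sketch of how an embedder could apply these limits (GraalVM Enterprise with the GraalVM SDK on the class path; the option names are those listed above, the concrete values are illustrative):

```java
import org.graalvm.polyglot.Context;

public class SandboxExample {
    public static void main(String[] args) {
        // Build a context whose guest code is capped by sandbox limits.
        // The options are experimental in 20.3, hence the explicit opt-in.
        try (Context context = Context.newBuilder("js")
                .allowExperimentalOptions(true)
                .option("sandbox.MaxCPUTime", "1000ms")
                .option("sandbox.MaxStatements", "500000")
                .option("sandbox.MaxThreads", "1")
                .build()) {
            context.eval("js", "1 + 1");
        }
    }
}
```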
We’ll explore the sandboxing limits in more detail in an upcoming article specifically about it.
The supported Node.js version in GraalVM 20.3.0 was updated to 12.18.4.
GraalVM’s JavaScript runtime improved its Nashorn compatibility mode, among other things by:
- enabling low-precedence lossy number, string-to-boolean, and number-to-boolean conversions,
- fixing the field, getter, and setter access order to mimic Nashorn.
In every release, TruffleRuby comes with a ton of compatibility fixes and performance improvements.
On top of the warmup-focused changes that Truffle brings to all languages, Ruby in 20.3.0 includes its own warmup improvements, notably much less splitting, and more operations are now done inline without a call. This should result in reaching peak performance sooner.
The compatibility of GraalVM’s Python implementation continues to improve. The 20.3.0 release contains many fixes to pass the unit tests of standard library types and modules: bytearray, subclassing of special descriptors, type layouts, float, generators, modules, argument-passing corner cases, string literals and encodings, operator, and the numeric tower.
On top of that, the Python version was updated to 3.8.5. As it becomes better known in the community, the standalone version of
graalpython can now be installed through pyenv.
LLVM bitcode runtime
Besides performance improvements and bug fixes, GraalVM 20.3.0 now supports code sharing in the GraalVM LLVM runtime. This allows the AST and compiled code of common bitcode libraries to be shared between multiple contexts within a single engine.
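Code sharing follows the standard polyglot pattern of attaching multiple contexts to a single shared engine; here is a hedged sketch (GraalVM SDK assumed on the class path, the bitcode file name is illustrative):

```java
import java.io.File;

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Engine;
import org.graalvm.polyglot.Source;

public class SharedEngineExample {
    public static void main(String[] args) throws Exception {
        // One engine: the AST and compiled code of common bitcode
        // libraries are shared between the contexts created below.
        try (Engine engine = Engine.create()) {
            Source library = Source.newBuilder("llvm",
                    new File("libexample.bc")).build();
            try (Context c1 = Context.newBuilder("llvm").engine(engine).build();
                 Context c2 = Context.newBuilder("llvm").engine(engine).build()) {
                c1.eval(library);
                c2.eval(library); // reuses the code parsed for c1
            }
        }
    }
}
```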
The LLVM toolchain was also updated to version 10.0.0.
A major improvement in tooling for GraalVM in 20.3.0 is the new VSCode GraalVM extension for Java. It includes a lot of functionality that simplifies working with Java projects. For example:
- Java syntax highlighting
- Java code completion
- Integrated Java debugger
- Integrated Polyglot debugger for GraalVM languages
There’s also a new VS Code Micronaut extension which leverages the features of the GraalVM extension for Java, and additionally simplifies working with Micronaut apps, specifically helping to create new Micronaut projects and build native executables of Micronaut applications.
What do you want to see in GraalVM?
With every release we introduce new features that can make your applications run faster, open new opportunities and make the development process more efficient and fun. There are many things on our project roadmap, and we would love to hear from you which would be the most useful, and what is still missing. Respond to this survey to let us know how we should improve and extend GraalVM. Feel free to also share it with your team or community — having more opinions will help us to make GraalVM better for everyone.
Note that these are only some of the most notable improvements in GraalVM 20.3.0! Please read a more detailed outline of the new and noteworthy features in the release notes. For the components you’re most interested in, the changelogs are an invaluable source of information!
— GraalVM Team