GraalVM 22.2: Smaller JDK size, improved memory usage, better library support, and more!

Alina Yurenko
Published in graalvm
Jul 26, 2022 · 7 min read

Today we’re releasing GraalVM 22.2! This release brings new features and lots of improvements to the developer experience — let’s go through the highlights!

This release includes support for JDK 11 and JDK 17. As always, you can download GraalVM Community from GitHub, or get the GraalVM Enterprise builds from OTN. You can also install the latest version of GraalVM via the GraalVM Extension Pack for Java for VS Code or with Homebrew on macOS. For a detailed list of updates, check out the release notes. Now, let’s take a look at what’s new in this release!

Alternatively, you can watch our release stream replay.

Smaller GraalVM JDK distribution

One of the important aspects of a runtime is the size of its base distribution: the size of a runtime impacts download times and the developer experience. Starting with 22.2, the base GraalVM JDK is more modular and no longer includes the JavaScript runtime, the LLVM runtime, or VisualVM. To install those components, use gu install js, gu install llvm, or gu install visualvm in the same way that you already install Native Image, Python, Ruby, or other GraalVM components. This means the base GraalVM JDK download is much smaller. If you are using GraalVM to run Java applications on the JVM or using Native Image, there is no change to the way you set up GraalVM and run your applications, except for significantly smaller JDK downloads. Here is a table with a comparison of 22.1 versus 22.2 for the JDK 17 artifacts:

JDK size comparison: GraalVM 22.1 vs. 22.2

We hope that this change makes CI/CD setups more efficient and improves the developer experience for GraalVM users.

Making third-party libraries support Native Image

During the build process, Native Image compiles only the code that is reachable from your application’s main entry point. This approach optimizes for the best startup performance and usage of resources, but poses challenges for dynamic Java features, such as reflection and serialization. If some element of your application is not reachable, it won’t be included in the native executable. But this can lead to failures when the element is required and accessed via reflection at run time. To include elements that Native Image deems unreachable, you must provide Native Image with metadata (configuration information). The metadata can be automatically provided by a framework or created manually, for example, with the help of the tracing agent. To simplify this task further, we are introducing the GraalVM Reachability Metadata Repository — a centralized repository that library and framework maintainers (as well as Native Image users) can use to share metadata for Native Image. Read more about using and contributing metadata in a related blog post.
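
To make this concrete, here is a minimal, hypothetical sketch of the kind of reflective code that needs such metadata (the handler.class property name and its default value are made up for illustration):

public class ReflectiveLookup {
    public static void main(String[] args) throws Exception {
        // The class name is only known at run time, so Native Image's static
        // reachability analysis cannot see that it must be kept. Without
        // reflection metadata for it, Class.forName fails in the native executable.
        String className = System.getProperty("handler.class", "java.util.ArrayList");
        Class<?> clazz = Class.forName(className);
        Object handler = clazz.getDeclaredConstructor().newInstance();
        System.out.println("Instantiated " + handler.getClass().getName());
    }
}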

The metadata repository is integrated with the GraalVM Native Build Tools. For example, to enable automatic use of the metadata repository in a Gradle project, add the following to your build.gradle config:

graalvmNative {
    metadataRepository {
        enabled = true
    }
}

The metadata repository will significantly simplify the use of third-party libraries in production Native Image applications and we welcome contributions.

Smaller memory footprint of Native Image builds

Thanks to several improvements in internal data structures, significantly less memory is required by Native Image when it builds a native executable. The reduction of memory usage is particularly beneficial in memory-constrained environments, such as cloud services and GitHub Actions. Starting with release 22.2, the Native Image tool can successfully build many larger native executables with only 2 GB of Java heap.

For example, the Spring PetClinic application now builds with 2 GB of memory:

Building a Spring PetClinic application in 2 GB of memory

Generating heap dumps in Native Image

Dumping the heap of native executables at run time is now a supported feature in GraalVM Community. There are several ways to produce a heap dump, including -XX:+DumpHeapAndExit, a new command-line option that dumps the initial heap of a native executable. The heap dumps can be analyzed with Java heap dump tools such as VisualVM:

Analyzing heap dump of a Native Image application in VisualVM
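
A heap dump can also be requested from application code. Below is a minimal sketch, assuming the VMRuntime.dumpHeap(String, boolean) API from the org.graalvm.nativeimage SDK; the output file name is just an example:

import java.io.IOException;
import org.graalvm.nativeimage.ImageInfo;
import org.graalvm.nativeimage.VMRuntime;

public class HeapDumpExample {
    public static void main(String[] args) throws IOException {
        // ... application work ...
        if (ImageInfo.inImageRuntimeCode()) {
            // Write a heap dump of the running native executable; the resulting
            // file can then be opened in VisualVM or another heap dump tool.
            VMRuntime.dumpHeap("example.hprof", true);
        }
    }
}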

Other updates in Native Image include:

  • Added support for a Software Bill of Materials (SBOM) to GraalVM Enterprise Native Image. An SBOM is a list of the components used in a software artifact. GraalVM Native Image can now optionally embed an SBOM in a native executable to aid vulnerability scanners. Currently we support the CycloneDX format, which you can enable with the -H:IncludeSBOM=cyclonedx command-line option during compilation. After embedding the compressed SBOM into the executable, you can use the Native Image Inspection tool (available in GraalVM Enterprise) to extract it with this command: $GRAALVM_HOME/bin/native-image-inspect --sbom <path_to_binary>.
  • Improved debugging support on Linux: the DWARF debug information now contains parameters and local variables (thanks to Red Hat for this contribution!). Also check out the new experimental support for Native Image debugging in IntelliJ IDEA 2022.2 EAP 5. Stay tuned for more updates on this soon!

Compiler updates

Reduced memory usage in JIT mode. The Graal compiler now uses memory more efficiently at run time. When an application warms up and reaches a stable state with few or no compilations needed, the Graal compiler releases unused memory back to the system. Try it on your own workload and evaluate the memory impact, for example by watching resident set sizes with ps aux --sort=-rss.

New strip mining optimization for counted loops. Strip mining converts a single long-running loop into a nested loop where the inner body runs for a bounded time. This enables putting a safepoint in the outer loop to reduce the overhead of safepoint polling. By choosing the right value for the outer loop stride, we ensure reasonable time-to-safepoint latency. The latter is particularly important for low-pause-time garbage collectors such as ZGC and Shenandoah. Look at the following example:

for (long i = init; i < limit; i += stride) {
    use(i);
}

becomes

final long stripMax = (long) CountedStripMiningInnerLoopTrips;
for (long i = init; i < limit;) {
    long innerTrips = i < limit - stripMax ? stripMax : limit - i;
    long i_ = i;
    for (long j = 0; j < innerTrips; j++) {
        use(i_);
        i_ += stride;
    }
    i = i_;
}

As a result, we see around 20% increase in speed on workloads that exercise long range checks, such as code using the foreign-memory access API. This optimization is also beneficial for Truffle languages. In 22.2 this optimization is experimental — enable it with the command-line option -Dgraal.StripMineCountedLoops=true. We would appreciate your feedback and performance reports, and plan to enable this optimization by default in 22.3.

Another new optimization is global value numbering for fixed nodes early in the compilation pipeline. This optimization can improve workloads that require complex partial escape analysis and unrolling optimizations in order to optimize away constant loops with complex object allocations (as seen for example in some Ruby workloads). This optimization is also potentially very beneficial for Native Image, as it can speed up build time (by reducing graph sizes earlier in the compilation pipeline) and accelerate the generated native executables themselves by folding more memory operations. In this release, it’s disabled by default — give it a try with the command-line option -Dgraal.EarlyGVN=true.
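
As a rough, hypothetical illustration of the kind of pattern this targets (not taken from an actual benchmark): a constant-bounded loop whose body repeatedly allocates boxed values, which the compiler can ideally fold down to a constant once duplicate nodes are value-numbered early and partial escape analysis removes the allocations:

public class ConstantLoop {
    // A constant-bounded loop with boxing allocations in the body. After early
    // global value numbering, partial escape analysis and unrolling, the compiler
    // can scalar-replace the boxes and fold the result to a constant.
    static int constantSum() {
        Integer total = 0;
        for (int i = 0; i < 8; i++) {
            total = total + i;
        }
        return total; // ideally folds to the constant 28
    }

    public static void main(String[] args) {
        System.out.println(constantSum());
    }
}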

GraalVM Enterprise for Apple Silicon and new components for GraalVM Community

Apple Silicon users can now develop applications using GraalVM Enterprise! You can get the builds here. Support for Apple Silicon was one of the most requested features on GraalVM's GitHub, and now GraalVM Enterprise users can benefit from it too. Note that support is experimental in this release. We would appreciate your issue reports and feedback.

In addition to the JDK and Native Image, we also added Apple Silicon support for a number of components in both GraalVM Community and Enterprise Edition:

  • JavaScript
  • the LLVM toolchain and runtime
  • Ruby
  • Java on Truffle
  • WebAssembly

GraalPython: faster startup and extended library support

We added an experimental bytecode interpreter to GraalPython for faster startup and better interpreter performance. It’s not enabled by default in this release — enable it by using the command-line option --python.EnableBytecodeInterpreter. Additionally, we updated to HPy version 0.0.4, which adds support for the (completed) HPy port of Kiwi, and the in-progress ports of Matplotlib and NumPy.
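
If you embed GraalPython via the polyglot API rather than the launcher, the same switch should be reachable as a context option. Here is a minimal sketch, assuming the option name python.EnableBytecodeInterpreter mirrors the launcher flag (it is experimental, hence allowExperimentalOptions):

import org.graalvm.polyglot.Context;

public class PythonBytecodeInterpreterExample {
    public static void main(String[] args) {
        // Requires the GraalPython component (gu install python).
        try (Context context = Context.newBuilder("python")
                .allowExperimentalOptions(true)
                .option("python.EnableBytecodeInterpreter", "true")
                .build()) {
            context.eval("python", "print(sum(range(10)))");
        }
    }
}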

Improved interoperability in GraalJS

Starting with 22.2, objects from other languages are assigned a proper JavaScript prototype by default. This feature, previously experimental, increases the portability of code by letting foreign objects appear as arrays, functions, and other types from JavaScript. Expect more details in a follow-up blog post.
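
As a minimal sketch of what this enables for embedders (assuming host access is granted; exact semantics may vary by type), a plain Java array passed into a JavaScript context can be used directly with Array.prototype methods:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class ForeignPrototypesExample {
    public static void main(String[] args) {
        try (Context context = Context.newBuilder("js").allowAllAccess(true).build()) {
            context.getBindings("js").putMember("hostArray", new int[]{3, 1, 2});
            // With the 22.2 default, the host array is given a JavaScript prototype,
            // so methods such as map can be called on it from JS code.
            Value doubled = context.eval("js", "hostArray.map(x => x * 2)");
            System.out.println(doubled.getArraySize()); // 3
        }
    }
}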

Community Contributions

As always, we are grateful to the community and our partners in the ecosystem for working with us on this release. We received many helpful reports and contributions on GitHub, and by sharing feedback and helping other community members on GraalVM's platforms, you help make GraalVM better for everyone. In particular:

  • Improved debugging support on Linux was contributed by Red Hat;
  • The metadata repository was built as a collaboration among the GraalVM, Micronaut, Quarkus, and Spring Boot teams.

Conclusion

Thanks to the GraalVM community for all the feedback, suggestions, and contributions that went into this release. If you have additional feedback on this release, or suggestions for features that you would like to see in future releases, please share them with us on Slack, GitHub, or Twitter.

— the GraalVM team
