
GraalVM team blog - https://www.graalvm.org

Welcome, GraalVM for JDK 24!🚀


Today we are releasing GraalVM for JDK 24!

As always, we release GraalVM on the same day that Java 24 is released, so you can use GraalVM as your Java 24 JDK.

You can already download GraalVM and check the release notes for more details. Keep reading this blog post to see what’s new in this release!

Alternatively, you can watch our release stream for an overview of the updates in this release, along with demos.

More Peak Performance with Machine Learning 🙌

You might know about ML-based profile inference in Native Image: in the absence of user-supplied profiling information, Native Image in Oracle GraalVM uses a pre-trained ML model to predict the execution probabilities of the control flow graph branches. This enables powerful optimizations, improving the peak performance of native images out of the box. In this release, we are introducing a new generation of ML-enabled profile inference — GraalNN. With GraalNN, we are observing a ~7.9% peak performance improvement on average on a wide range of microservices benchmarks (Micronaut, Spring, Quarkus, and others).

The impact of ML-based profile inference in Native Image. The benchmark runs a Micronaut helloworld application with ML-enabled PGO compared to the baseline (no profiling information)

To enable this optimization, pass the -O3 (optimize for peak performance) flag to Oracle GraalVM Native Image. Note that this approach doesn’t require a training run and a follow-up build — you can get improved peak throughput with just one build.
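As a sketch, a build that enables these optimizations needs nothing beyond the flag itself (app.jar and the image name myapp are placeholders for your application):

```shell
# Build with ML-enabled profile inference and peak-performance
# optimizations enabled (Oracle GraalVM)
native-image -O3 -jar app.jar -o myapp
```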
For more implementation details, you can see our paper “GraalNN: Context-Sensitive Static Profiling with Graph Neural Networks”, which we will present at this year’s International Symposium on Code Generation and Optimization (CGO). We are working on further improving this optimization, so expect even higher performance in GraalVM for JDK 25!

Even Smaller Native Executables 📦

Points-to analysis is a crucial part of the GraalVM Native Image build process. While Java projects might contain tens of thousands of classes coming from dependencies, we can avoid compiling all the methods by analyzing which classes, methods, and fields are reachable, i.e., actually needed at runtime.

In this release we are introducing SkipFlow — an extension of our points-to analysis that tracks primitive values and evaluates branching conditions during the run of the analysis. It allows us to produce ~6.35% smaller binaries without increasing the build time. In fact, image builds tend to be even slightly faster with SkipFlow enabled because there are fewer methods to analyze and compile.

The impact of the SkipFlow optimization on the size of native executables

You can find more details about this optimization in our CGO paper, “SkipFlow: Improving the Precision of Points-to Analysis using Primitive Values and Predicate Edges”, and a follow-up blog post that we will share soon.

This optimization is included in GraalVM for JDK 24 as experimental and is not yet enabled by default. To test it out, you can use the flags -H:+TrackPrimitiveValues and -H:+UsePredicates. Feel free to share your feedback with us, as we plan to enable it by default in GraalVM for JDK 25.
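For example, a build with SkipFlow enabled might look like the following sketch (app.jar is a placeholder; depending on your version, you may also need to unlock experimental options first):

```shell
native-image \
  -H:+UnlockExperimentalVMOptions \
  -H:+TrackPrimitiveValues -H:+UsePredicates \
  -jar app.jar -o myapp
```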

Premain Support for Java Agents 🥷

One of the common requests from both our users and partners was to extend the Java agents support in Native Image. Up to now, agents have been supported by Native Image but with some constraints:

  • The agent had to run and transform all classes at build time;
  • The premain of the agent was only executed at build time;
  • All of the classes needed to be present in the agent jar;
  • The agent could not manipulate the classes used by Native Image itself.

With this release, we are taking the first step towards agent support at runtime. We have added support for premain for static agents, and it currently works as follows:

  • At compile time, use -H:PremainClasses= to set the premain classes;
  • At run time, use -XX-premain:[class]:[options] to pass premain runtime options along with the main class’s arguments.

This now allows premain methods to be initialized when the native image actually runs.
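Putting the two steps together, a build-and-run session might look like this sketch (com.example.MyAgent, com.example.Main, the jar names, and the agent option verbose=true are all hypothetical placeholders):

```shell
# Build time: register the agent's premain class
native-image -cp app.jar:agent.jar \
  -H:PremainClasses=com.example.MyAgent \
  -o myapp com.example.Main

# Run time: pass premain options, followed by the application's own arguments
./myapp -XX-premain:com.example.MyAgent:verbose=true --app-args
```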

We would like to thank Alibaba for their contributions to this feature.

We have more work planned in GraalVM for JDK 25. In the meantime, you can help us by telling us which agents specifically you would like to use with Native Image — let us know in the GitHub ticket or via our community platforms.

Vector API support in Native Image 🚀

The Vector API enables vector computations that reliably compile to optimal vector instructions, resulting in performance superior to equivalent scalar computations.

In this release, we have continued our work to optimize more Vector API operations on GraalVM, with more operations now efficiently compiled to SIMD code, where supported by the target hardware:

  • operations on Vector API masks,
  • masked Vector API loads and stores,
  • general Vector API rearrange operations,
  • and Vector API loads/stores to and from memory segments.

Additionally, we’re excited to announce that Vector API support in Native Image is now on par with JIT!🎉 To enable Vector API optimizations when building native images, pass the --add-modules jdk.incubator.vector and -H:+VectorAPISupport options at build time.
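As a small illustration of the kind of code that benefits, here is a minimal, standard Vector API loop (not taken from the release itself): it processes arrays in species-sized chunks, with a scalar tail for the remainder.

```java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorAdd {
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // Element-wise c = a + b; the main loop compiles to SIMD instructions
    // where the hardware supports it, the tail loop handles leftovers.
    static void add(float[] a, float[] b, float[] c) {
        int i = 0;
        int upper = SPECIES.loopBound(a.length);
        for (; i < upper; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.add(vb).intoArray(c, i);
        }
        for (; i < a.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f, 5f};
        float[] b = {10f, 20f, 30f, 40f, 50f};
        float[] c = new float[a.length];
        add(a, b, c);
        System.out.println(java.util.Arrays.toString(c)); // [11.0, 22.0, 33.0, 44.0, 55.0]
    }
}
```

Compiling and running this code requires the --add-modules jdk.incubator.vector flag; when built as a native image with -H:+VectorAPISupport, the vector loop is compiled to SIMD code on supported hardware.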

One of the areas where the Vector API really shines is large language models, as most of the compute heavy operations there are matrix and vector multiplications. As an example, you can take a look at Llama3.java, a hobby project of our colleague, Alfonso² Peterssen. It’s a one-file local LLM inference engine implemented in pure Java, utilizing the latest features of Java’s Vector and FFM APIs and powerful optimizations coming from GraalVM. By combining all of these, you get a blazing fast local LLM assistant, compiled with GraalVM Native Image, running purely on CPU:

demo of llama3.java running as a native executable

You can see how fast the engine is — with the optimizations coming from GraalVM, the model “responds” blazing fast — in this demo, the 1B parameter Llama 3.2 model with 4.5 bits/weight quantization runs at 52.25 tokens/s. Better yet, by utilizing more aggressive AOT optimizations, such as model preloading at build time, you can completely eliminate startup overhead.

Please note that Vector API support on GraalVM is considered experimental. We would be happy to receive feedback as we further improve the number of optimized Vector API operations.

More efficient applications with Native Image 🌿

Native Image is well-known for giving applications fast startup, low memory and CPU usage, and compact packaging. Lately, there is another aspect of Native Image where we see increasing interest from the community: resource savings. Thanks to AOT compilation and optimizations, Native Image applications can run more efficiently, reducing resource consumption — including electricity usage 🔋. As an example, we measured the energy consumption of Spring PetClinic running on JIT and on Native Image in several scenarios with increasing load:

  • Scenario A: 1 curl request, 1s after curl response
  • Scenario B: 1 curl request, 10s total run time
  • Scenario C: 4000 requests/s, 20 seconds
  • Scenario D: 4000 requests/s, 100 seconds

Energy consumption of a Spring PetClinic application running under increasing load on JIT vs AOT. The measurements were performed using powerstat on an Intel i7-9850H @ 2.60GHz machine, with the CPU frequency fixed to 2GHz.

As we see, the natively compiled version of the application consistently consumes less energy, even under constant load. Our initial findings are also aligned with a community study performed by Ionut Balosin.

If you are looking for a way to reduce the resource usage of your applications, consider compiling them with GraalVM.

New security features in Native Image🛡️

As you might know, Oracle GraalVM Native Image offers SBOM support, which is essential for vulnerability scanning. To generate an SBOM file in CycloneDX format and embed it into a native executable, pass the --enable-sbom flag; use --enable-sbom=classpath,export if you want to add it to the application’s resources or export it as JSON.

With this flag you can generate an SBOM this way for any project, but for even more accurate SBOMs, we recommend using the Maven plugin for GraalVM Native Image. The plugin creates a “baseline” SBOM by using cyclonedx-maven-plugin. The baseline SBOM defines which package names belong to a component, helping Native Image associate classes with their respective components—a task that can be tricky when using shading or fat JARs. In this collaborative approach, Native Image is also able to prune components and dependencies more aggressively to produce a minimal SBOM.

These enhancements are available starting with plugin version 0.10.4 and are enabled by default when using the --enable-sbom option.
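For reference, a minimal configuration of the Native Build Tools Maven plugin with SBOM generation enabled might look like the following sketch (the version and flag come from above; the rest of the pom is omitted):

```xml
<plugin>
  <groupId>org.graalvm.buildtools</groupId>
  <artifactId>native-maven-plugin</artifactId>
  <version>0.10.4</version>
  <configuration>
    <buildArgs>
      <buildArg>--enable-sbom</buildArg>
    </buildArgs>
  </configuration>
</plugin>
```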

In this release we also added support for dependency trees: the SBOM now provides information about component relationships through the CycloneDX dependencies field format. This dependency information is derived from Native Image’s static analysis call graph. Analyzing the dependency graph can help you understand why specific components are included in your application. For example, discovering an unexpected component in the SBOM allows for tracing its inclusion through the dependency graph to identify which parts of the application are using it.

Additionally, we added support for class-level metadata for SBOM components. You can enable it with --enable-sbom=class-level. This metadata includes Java modules, classes, interfaces, records, annotations, enums, constructors, fields, and methods that are part of the native executable. This information can be useful for advanced vulnerability scanning to determine if a native executable with the affected SBOM component is actually vulnerable, thereby reducing the false positive rate of vulnerability scanning, and for better understanding which components are included in the native executable. Note that including class-level metadata increases the SBOM size substantially — include it only when you need detailed information about your native executable’s contents.

We have also added SBOM support to GraalVM’s GitHub action. You can now automatically generate a highly accurate SBOM with Native Image and submit it to GitHub’s dependency submission API. This enables simple integration with all the powerful security tooling that GitHub provides:

You can activate this feature in setup-graalvm with the option native-image-enable-sbom.
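In a workflow, that might look like the following sketch (the java-version and distribution values are illustrative):

```yaml
- uses: graalvm/setup-graalvm@v1
  with:
    java-version: '24'
    distribution: 'graalvm'
    github-token: ${{ secrets.GITHUB_TOKEN }}
    native-image-enable-sbom: 'true'
```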

Build reports

To better understand your Native Image builds and the contents of the produced executables, you can use Build reports. These are HTML reports that can be generated alongside your Native Image build, providing details about the following:

  • Build overview, such as the build environment, analysis results, and resource usage, which can also be exported for integration purposes;
  • Code area and image heap which can help you understand which methods and objects make up your application;
  • Resources tab showing included, missing, and injected (such as by frameworks) resources;
  • SBOM information that also can be exported as JSON (in Oracle GraalVM);
  • Profiling information visualization represented as a flame graph and histogram (requires supplying profiles).
Native Image Build reports

To get started with build reports, pass --emit build-report when building an application. To learn more about Build Reports, navigate to the docs.

Debugging updates

In this release we introduced several debugging updates:

  • We have added a GDB Python script (gdb-debughelpers.py) to improve the Native Image debugging experience — learn more.
Debugging native images with a GDB Python script
  • We added support for emitting Windows x64 unwind info. This enables stack walking in native tooling, such as debuggers and profilers.
  • We updated debug info from DWARF4 to DWARF5 and now store type information in DWARF type units. This helped us reduce the size of debugging information by 30% 💪

Monitoring updates 📈

We have added experimental support for jcmd on Linux and macOS. jcmd is used to send diagnostic command requests, which are useful for controlling Java Flight Recorder, troubleshooting, and diagnosing applications. To try it out, add --enable-monitoring=jcmd to your application build arguments. See the documentation for more details.
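As a sketch, a session might look like this (app.jar, the image name, and the PID are placeholders; JFR.start is one of the standard diagnostic commands, and jfr monitoring is enabled alongside jcmd so that recordings can be controlled):

```shell
# Build with jcmd (and JFR) support
native-image --enable-monitoring=jcmd,jfr -jar app.jar -o myapp

# While the application is running, send a diagnostic command to it
jcmd <pid> JFR.start duration=60s filename=recording.jfr
```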

We would like to thank Red Hat for their contributions to this feature.

Usability 👩‍💻

  • We have removed the "customTargetConstructorClass" field from the serialization JSON metadata. All possible constructors are now registered by default when registering a type for serialization. RuntimeSerialization.registerWithTargetConstructorClass is now deprecated.
  • Serialization JSON reachability metadata can now be included in reflection metadata via the "serializable" flag.

Here is how such an entry would look for a regular serialized.Type:

{
  "reflection": [
    {
      "type": "serialized.Type",
      "serializable": true
    }
  ]
}

and for a proxy class:

{
  "reflection": [
    {
      "type": {
        "proxy": ["FullyQualifiedInterface1", "...", "FullyQualifiedInterfaceN"]
      },
      "serializable": true
    }
  ]
}

Miscellaneous

  • Native Image now targets armv8.1-a by default on AArch64. Use -march=compatibility for best compatibility, or -march=native for the best performance on machines with the same CPU features.
  • We added support for Java module system-based service loading — for example you can now specify module Foo { provides MyService with org.example.MyServiceImpl; } in module-info.java.
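For context, a provider declared this way is looked up at run time through the standard ServiceLoader API; here is a short sketch using the hypothetical names from the example (MyService and its run() method are placeholders):

```java
// In the consuming module, declare: uses MyService;
// then iterate all providers registered via `provides` clauses:
ServiceLoader.load(MyService.class).forEach(MyService::run);
```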

Community and Ecosystem

  • According to InfoQ’s Java 2024 Trends Report, GraalVM, and specifically Native Image, is now considered a technology being used by the Early Majority. We are happy to see that the ongoing work of our team and community to make Native Image stable, fast, and production-ready, is being recognized. Along with Native Image, we are glad to see GraalPy and GraalWasm featured in Innovators! 🚀
  • Great news for maintainers of projects hosted on GitHub: you can now easily use GraalVM with setup-java!🎉
  • While we are on the topic of GitHub, you can now build with GraalVM using GitHub’s Linux ARM64 hosted runners, and we have already added support for them in setup-graalvm, so you can get started right away.
  • We are happy to welcome Sandra Ahlgrimm and Microsoft to the GraalVM Advisory Board!
  • Micronaut 4.7 added experimental support for LangChain4j and integration with GraalPy, so you can invoke Python code from Java easily in a Micronaut application.
  • The AWS CRT (Common Runtime) package for Java added support for Native Image. With GraalVM Native Image, cold-start request processing time was reduced 4X for 90% of requests, and warm-start requests took 18–25% less time to process.
GraalVM Native Image support in AWS CRT Client for Java
  • With Spring Boot 3.4, SBOM is now auto-detected when building a native image with --enable-sbom=sbom.
  • Quarkus introduced support for the Model Context Protocol, which enables AI models to interact with applications and services in a decoupled way and works great with GraalVM Native Image (see how).

Conclusion

We’d like to take this opportunity to thank our amazing contributors and community for all the feedback, suggestions, and contributions that went into this release.

If you have feedback for this release or suggestions for features that you would like to see in future releases, please share them with us on Slack, GitHub, or BlueSky.

Now go ahead and try the new GraalVM! 🚀

— the GraalVM team

Written by Alina Yurenko

I love all things tech & community. Developer Advocate for @graalvm, blog posts about programming, open source, and devrel.
