GraalVM Quick Reference
Find more recent quick references here.
There are three main ways GraalVM can help with Java applications: making them faster with a state-of-the-art JIT compiler, compiling them ahead of time into standalone native executables with instant startup and low memory consumption, and enhancing them with libraries or code written in other supported languages.
This quick reference is a one-page short summary of what GraalVM can do and the key options and commands to illustrate its capabilities.
You can download and print it. It fits neatly on a sheet of A4 paper, so you can hang it in the office as a reminder of what GraalVM can do and which options enable each capability. Be sure to grab the PDF version for printing so it looks neat and sharp. If your printer prefers the US Letter paper format, get that version instead; it will look better.
In this post, we’ll go over the information in the quick reference and describe it in more detail.
The first part focuses on running Java applications. First of all, it's good to remember that GraalVM distributions contain a JDK with all the usual JDK tools, meaning you can use GraalVM as your JDK. For example, you can compile Java source code with the javac compiler as usual:
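A minimal sketch of what that looks like (MyApp.java is just a placeholder name):

```shell
# Compile with the javac shipped in the GraalVM distribution
javac MyApp.java

# ...and run the result on the GraalVM JVM
java MyApp
```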
Naturally, you can also use GraalVM to run your Java applications, or applications in any other JVM language. In addition, GraalVM benefits from its powerful just-in-time (JIT) compiler and achieves state-of-the-art peak performance, often running applications faster than other JDKs.
java -jar MyApp.jar
When running Java applications just in time with GraalVM, the underlying JVM is the usual Java HotSpot VM, which means that most configuration options are equivalent. For example, specifying the classpath and the main class for your application works like this:
java -cp target/myapp.jar com.example.Main
The GraalVM compiler normally runs precompiled as a native shared library. However, its code is written in Java, so it can also be used as a JAR file, which results in a slightly different performance profile: compilation then uses the Java heap, at the expense of longer warmup (for example, the Java code of the compiler itself needs to be JIT-compiled first). The following option configures which mode to run in (enabled by default):
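If I recall the JVMCI option name correctly, the flag in question is UseJVMCINativeLibrary; a sketch of toggling it (MyApp.jar is a placeholder):

```shell
# Run with the compiler as a native shared library (the default)
java -XX:+UseJVMCINativeLibrary -jar MyApp.jar

# Run with the compiler as Java code on the JVM heap instead
java -XX:-UseJVMCINativeLibrary -jar MyApp.jar
```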
In addition to running the compiler as a JAR or as a native library, you can choose a configuration profile for its optimizations: economy for faster warmup, or enterprise for the best peak performance (which requires GraalVM Enterprise, of course).
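These profiles are selected with the graal.CompilerConfiguration system property, assuming the property name matches the GraalVM release you use:

```shell
# Faster warmup, simpler optimizations
java -Dgraal.CompilerConfiguration=economy -jar MyApp.jar

# Best peak performance (GraalVM Enterprise only)
java -Dgraal.CompilerConfiguration=enterprise -jar MyApp.jar
```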
The impact of the JIT compiler can be dramatic, speeding up applications considerably, but sometimes it's unclear whether the compiler is working, whether the code actually reaches the top compilation tier, and which methods get compiled.
In addition to looking at the logs, you can enable more debug output; for example, you can print the compiler graphs and analyze them to find additional optimization opportunities:
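One way to do this, assuming the graal.Dump debug option behaves as in the releases contemporary with this post, is to dump the graphs and then inspect them in a tool like the Ideal Graph Visualizer:

```shell
# Dump compiler graphs (verbosity level 2) for the compiled methods
java -Dgraal.Dump=:2 -jar MyApp.jar
```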
And of course, other features of the JVM work with GraalVM too. For example, you can attach a Java agent that instruments the code, generates classes at runtime, and does other Java “agenty” things. Both Java-based agents and native agents work. One notable example is the assisted configuration agent that simplifies native image builds.
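The assisted configuration agent is attached like any other native agent; a sketch (the output directory is an arbitrary choice):

```shell
# Record reflection, resource, and proxy usage while the app runs,
# producing configuration files for the native image build
java -agentlib:native-image-agent=config-output-dir=META-INF/native-image -jar MyApp.jar
```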
The second major advantage of using GraalVM is its Native Image capabilities: compiling your application ahead-of-time into a native binary.
Update: We’ve published a separate quick reference for native-image, please check it out: Native Image quick reference.
In order to use it, you need to install the native-image component. One way to do this is to download the component’s JAR file for your GraalVM distribution and run the following:
gu install -L native-image.jar
Then you can use the installed
native-image utility to prepare the native binary of your application:
native-image [options] MyClass
Alternatively, you can use the JAR file syntax, similar to the java command:
native-image -jar MyApp.jar
Run the resulting binary like any executable file:
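By default the name of the binary is derived from the main class or the JAR file; assuming it came out as myapp (a placeholder here):

```shell
# A native executable starts instantly, no JVM required
./myapp
```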
If, instead of an executable, you would like to build a shared library, you can do that by passing the --shared option. You then need to mark the methods to be exposed with the @CEntryPoint annotation, but a more detailed exploration of this topic is beyond the scope of this article.
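A sketch of the build invocation, assuming the -H:Name option for naming the output (the names are placeholders):

```shell
# Build a native shared library instead of an executable;
# exported methods must be annotated with @CEntryPoint
native-image --shared -cp target/myapp.jar -H:Name=mylib
```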
Another very useful possibility is to build statically linked binaries, where OS libraries like libc are linked into the executable. It’s even possible to pick which libc implementation to use: glibc is used by default, and musl is an option, for which you need to prepare the build environment a bit.
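Assuming the --static and --libc options are available in your GraalVM version, static linking looks roughly like this:

```shell
# Statically link OS libraries into the executable (glibc by default)
native-image --static MyClass

# Use the musl libc implementation instead
native-image --static --libc=musl MyClass
```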
You can also include the engines of the supported guest languages in the native executable. For example, each of the --language options will include support for the corresponding language:
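A sketch, assuming the --language macro options of your distribution (js and python used as examples):

```shell
# Include the JavaScript engine in the native executable
native-image --language:js MyClass

# Include the Python engine
native-image --language:python MyClass
```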
Another fascinating capability is profile-guided optimization for native executables. You can generate an instrumented binary, apply relevant workloads to it, record the profile of the executed code similar to what the JIT does, and use those profiles to build the production binary.
native-image --pgo-instrument MyApp
native-image --pgo=profile.iprof MyApp
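Putting the whole profile-guided workflow together (binary and profile file names are placeholders; if memory serves, the instrumented run writes default.iprof unless told otherwise):

```shell
# 1. Build an instrumented binary
native-image --pgo-instrument MyApp

# 2. Run it with a representative workload to record profiles
./myapp

# 3. Rebuild using the recorded profiles
native-image --pgo=default.iprof MyApp
```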
And if you’d like more visibility into what happens during the native image build process, for example, to understand through which class chain a certain class got initialized, you can use a selection of helpful options.
Tracing the initialization path to a certain class is enabled by the following one:
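If the option name hasn’t changed since this was written, class initialization tracing looks like this (the class name is a placeholder):

```shell
# Report the chain through which the named class gets initialized
native-image --trace-class-initialization=com.example.SomeClass MyClass
```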
The native image build is a Java process, so you can put a breakpoint in the code and attach a debugger to have full visibility into what’s happening.
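One convenient way to attach, assuming the --debug-attach option exists in your version:

```shell
# Suspend the build and wait for a JDWP debugger (port 8000 by default)
native-image --debug-attach MyClass
```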
In addition to these, there are a ton of other useful options to configure the native image build and the runtime behavior, which we’ll explore in the future, but you can get the gist of what’s available using the expert help option:
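For example (assuming the option names haven’t changed across releases):

```shell
# List the expert-level build options
native-image --expert-options

# List all of them, including the obscure ones
native-image --expert-options-all
```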
On top of that, there are a number of language launchers that you can use for running programs in the supported languages:
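For example (which launchers are present depends on the components you’ve installed; the file names are placeholders):

```shell
js app.js            # JavaScript
node server.js       # Node.js
graalpython app.py   # Python
ruby app.rb          # Ruby
```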
The launchers, including node, by default run in the native mode, where the interpreter itself is compiled as a native image binary. So, to enable interoperability with the JVM and use Java classes, use the --jvm option; for interoperability with the other languages, use the --polyglot option.
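A sketch (app.js is a placeholder):

```shell
# Run on the JVM to allow calling Java classes from JavaScript
js --jvm app.js

# Additionally enable access to the other installed languages
js --jvm --polyglot app.js
```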
The language engines come with a number of features to limit resource consumption, for example, the amount of time the language context is allowed to run:
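The sandbox options express such limits (available in GraalVM Enterprise; the option name here is my best recollection and should be checked against your release):

```shell
# Limit the CPU time a JavaScript context may consume
js --sandbox.MaxCPUTime=10ms app.js
```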
And last but not least, the GraalVM languages have support for common developer tooling out-of-the-box. This is one of the most exciting parts of the whole GraalVM ecosystem — implement a language interpreter fixing the semantics of the language and get a powerful virtual machine, a selection of GC algorithms, a debugger, a profiler, a memory analyzer, and other tools for free.
Specify the following options to enable, respectively, the debugger based on the Chrome DevTools debugger, the sampling profiler, the tracing profiler, and the memory analyzer:
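Respectively, something like the following, with flag names as of the GraalVM releases contemporary with this post:

```shell
js --inspect app.js      # Chrome DevTools-based debugger
js --cpusampler app.js   # sampling profiler
js --cputracer app.js    # tracing profiler
js --memtracer app.js    # memory analyzer
```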
From using a better just-in-time compiler, to building native executables of your applications, to running components in different languages: you can use GraalVM today.
In this quick reference, we tried to outline the most frequently used options that describe the different capabilities of GraalVM.
Download it, print it out, and pin it on the wall. Hopefully, it’ll keep reminding you how interesting the GraalVM project is, even if you’re currently using only a fraction of what it can offer.
And while you’re at it, get the GraalVM distribution and try a few things from the quick reference, like running your Java apps faster, making microservices more cloud-friendly with native images, or enhancing your apps with libraries in other languages!