Making sense of Native Image contents

What code ends up in the executable and who’s to blame?

Olga Gupalo


Native Image transforms the runtime characteristics of Java applications to better fit cloud deployment needs. The executables it produces start instantly and can consume significantly less memory at run time, which makes them an ideal target for environments where resources are constrained or expensive and services are routinely scaled up and down.

These performance benefits come from compiling the application code ahead of time and initializing some of its classes in advance. When the application starts, it is immediately ready to do useful work and doesn’t need the infrastructure for loading bytecode, interpreting it, compiling it with the just-in-time compiler, and so on.

Most importantly though, the executables built with Native Image are standalone and don’t depend on the JVM for execution, because the necessary runtime components, like the garbage collector, are built into the same binary. Also, because they include the preinitialized heap data and the compiled code of the whole application, the binaries are typically larger than the JAR files.

In this article we introduce the GraalVM Dashboard, a web-based visualization tool that helps make sense of information about method compilation, reachability, class usage, profiling data, preinitialized heap data, and the static analysis results. In other words, you can see which classes, packages, and preinitialized objects fill up the executable, how much space a certain package occupies, and which classes’ objects take up most of the heap. All this data is very helpful for understanding how to optimize the application and make its binary even smaller.

Note: GraalVM Dashboard was removed in GraalVM for JDK 22. Instead, use Native Image Build Reports. Build reports provide useful visualizations and comprehensive insights into different metrics of your native executable and the build process itself.

GraalVM Dashboard UI

The dashboard interface is straightforward: the “Dashboard” tab shows the “Load data” button, the “Help” tab unfolds into a menu on the left, and the rest is the main window. The dashboard offers three visualization formats, which we will explore further in this post:

  • Code Size Breakdown — showing the size of the precompiled packages and classes
  • Heap Size Breakdown — showing which objects are on the preinitialized heap
  • Points-to Exploration — answering the question why certain classes and methods are included in the native image

The GraalVM Dashboard visualizes data from report files dumped by the native image builder, which contain details about the image.
Currently, it only accepts files in the “Native Image Dump Format” (.bgv). Depending on the data you would like to visualize, you need to pass certain flags when building a native image:

  • -H:DashboardDump=<path> to define the path for the dump file
  • -H:+DashboardAll to dump all available data
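For example, dumping all diagnostic data while building directly with the native-image tool could look like this sketch (app.jar and the dump file name are placeholders, not part of the demo):

```
# Build a native image and dump all dashboard data to dumpfile.bgv.
# "app.jar" is a placeholder for your application's runnable JAR.
native-image -jar app.jar \
    -H:DashboardDump=dumpfile \
    -H:+DashboardAll
```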

GraalVM Dashboard in action

To demonstrate the applicability of the dashboard, we will use the Multithreading demo sample application, which runs threads synchronously and asynchronously. Its business logic is straightforward and not particularly important: it starts a few threads, and every thread loops through the same array of integers and generates a stream of pseudorandom numbers. The program measures the time taken to perform the task synchronously and asynchronously.

The demo consists of two sub-projects, each built with Maven. First, we’ll work with Multithreading Demo Oversized and package its sources into a runnable JAR file with all dependencies. Please note that we are testing the project on GraalVM Enterprise 21.0.0 based on Java 8 for macOS, with Native Image installed. On other operating systems, or with GraalVM based on JDK 11, the absolute sizes shown here may differ slightly, but the main points of this article still stand.

Clone the application, build it and run it:

$ cd multithreading-demo-oversized/
$ mvn package
$ java -jar target/multithreading-1.0-jar-with-dependencies.jar
Synchronous execution for 4 times.
The execution for 4 times takes: 841ms.

Asynchronous threads execution for 4 Threads.
The execution of Thread 1 took: 182ms.
The execution of Thread 2 took: 192ms.
The execution of Thread 3 took: 191ms.
The execution of Thread 4 took: 196ms.
The execution of 4 Threads takes: 280ms.

The build uses the Native Image Maven plugin to build the native binary of the app, and we configured it to produce diagnostic data with these options:

-H:DashboardDump=dumpfileoversized -H:+DashboardAll

With this configuration, invoking mvn package (or a similar goal) produces dumpfileoversized.bgv, which we will later load into the GraalVM Dashboard to look for potential improvements to the program.
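For reference, the relevant part of the plugin configuration might look roughly like this sketch in pom.xml (the plugin coordinates and layout are assumptions that depend on your GraalVM version, not copied from the demo):

```xml
<!-- Sketch: passing the dashboard flags through the Native Image Maven plugin.
     Plugin coordinates and structure depend on the GraalVM version in use. -->
<plugin>
  <groupId>org.graalvm.nativeimage</groupId>
  <artifactId>native-image-maven-plugin</artifactId>
  <configuration>
    <imageName>multithreading-image-oversized</imageName>
    <buildArgs>
      -H:DashboardDump=dumpfileoversized -H:+DashboardAll
    </buildArgs>
  </configuration>
</plugin>
```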

Dumping diagnostic data at native image build time

After the build we can execute the multithreading-image-oversized image and compare the file sizes of the executable and the JAR file:

$ ./target/multithreading-image-oversized
Synchronous execution for 4 times.
The execution for 4 times takes: 424ms.

Asynchronous threads execution for 4 Threads.
The execution of Thread 1 took: 229ms.
The execution of Thread 2 took: 202ms.
The execution of Thread 3 took: 225ms.
The execution of Thread 4 took: 211ms.
The execution of 4 Threads takes: 234ms.
JAR, native image, and BGV file sizes

By compiling to a native executable, the program grew in size from 1.8M to 13M. This is because the native image builder packages all the necessary runtime components into the binary, pre-initializes some data during the build, and writes it out to the executable. In return, startup time is almost zero, as there is no JVM to warm up. The dumpfileoversized.bgv file weighs 29M because we configured the build to gather all available diagnostic data. Depending on the Java application, the dump file can grow even larger, in which case it makes sense to dump the diagnostic information separately:

  • -H:+DashboardHeap - to dump the breakdown of the image heap
  • -H:+DashboardCode - to dump the breakdown of the code size per method
  • -H:+DashboardPointsTo - to dump the points-to analysis information

By default, the dump is generated in the BGV format. It is also possible to dump in JSON or pretty-printed JSON, but then we have to specify that explicitly on the command line:

  • -H:+DashboardJson - to dump in a whitespace-free JSON format for a smaller file
  • -H:+DashboardPretty - to dump in a human-readable JSON format
  • -H:-DashboardBgv - to not dump in the BGV format; otherwise the dump is produced in both formats (note the minus “-” symbol after the colon)
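For instance, to dump only the code size breakdown as human-readable JSON and skip the BGV output, the flags could be combined like this (the file and JAR names are placeholders):

```
# Dump only the per-method code size data, as pretty-printed JSON, no BGV file.
native-image -jar app.jar \
    -H:DashboardDump=codesize \
    -H:+DashboardCode \
    -H:+DashboardPretty \
    -H:-DashboardBgv
```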

Let’s load the dumpfileoversized.bgv file into the GraalVM Dashboard to see what was included in the native image and contributes to its overall size.

Upload a dump file window

Code Size Breakdown

The Code Size Breakdown tool exists exactly to examine what precompiled code ends up inside the native image and which Java packages contribute most to its size. It displays the breakdown by packages, classes, and methods that were included in an image; the displayed package sizes are proportional to their sizes in the native image. Without the dashboard, to learn what content was packaged inside a native image, you would have to build with -H:+PrintUniverse and inspect the text output.

Code Size Breakdown view

In the dashboard UI you can click on a package rectangle, and the dashboard will “zoom” into the selected package group so you can investigate package sizes further.

The screenshot above shows the contents of the multithreading-image-oversized image. At first glance, we can see that a large part of our image consists of the com.fasterxml package, which is about 3M in size! If you are trying to optimize something, it makes sense to start with the largest bottlenecks, and the visualization in the dashboard is very helpful for identifying where to start.

Heap Size Breakdown

To understand which objects, and of which classes, occupy the heap of a native image, we use the Heap Size Breakdown instrument. It presents a visual summary of the sizes of the preallocated objects of different classes that were included in the native image heap. Preallocated objects are allocated in advance during the native image build and stored in the data section of the executable; at run time, they are loaded directly into memory.

Heap Size Breakdown view

Points-to Explorer and PointTo-SourceLine

The Points-to Explorer instrument lets you explore why a certain method was included in a native image: it shows the sequence of calls leading to that method, so you can decide whether the chain can be broken to avoid the method’s inclusion in the future. The search expands the graph recursively until it reaches the entry point. This visualization is accessible only from a leaf tile of the Code Size Breakdown histogram, because the dashboard needs a defined entry point (method) for it.

Points-to Explorer view

The dashboard is hosted together with the website on GitHub, and all visualization logic runs offline in the client-side HTML page. This client-side processing can become a problem if the dump file size exceeds the memory limits defined by the browser. The points-to analysis in particular can generate quite a large amount of data, so if your point of interest is the code size breakdown, avoid loading the full data dump and prefer the individual code size or heap diagnostic data.

If you use VS Code as your IDE and have the PointTo-SourceLine extension installed, you can quickly navigate from Points-to Explorer to the respective source line in the opened workspace. Every node with source line information prompts you to open the file.

Using the data

In the previous section we noted that the com.fasterxml package takes up a relatively large amount of space in the resulting executable. We are going to reduce the image size with some code tweaks.

At the beginning of our sample application, we use the Jackson JSON parser to load a configuration file with default values that are used in place of user input. For the purposes of the demo, these configuration values are very simple, and they are used only at startup.

If you need to optimize the executable size and compressing it with, for example, upx is not an option, the best path is to change the application code to use more lightweight constructs, perhaps a property file for the configuration, or to otherwise remove or simplify the dependencies.

Another, expert-level solution is to move the configuration initialization into a static block, so that it is loaded at image build time and the fasterxml classes are not included in the final image.
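As a minimal sketch of that idea (the class and field names are hypothetical, and a java.util.Properties string stands in for the demo’s real Jackson-based config.json parsing): a static initializer does the parsing once, so with --initialize-at-build-time it can run during the image build and only the resulting constants end up in the image heap.

```java
// Hypothetical sketch, not the demo's actual code: configuration loading in a
// static initializer. If this class is initialized at image build time, the
// parsing code runs in the builder and only the constant values are stored
// in the preinitialized heap of the executable.
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class Config {
    static final int DEFAULT_THREADS;
    static final int DEFAULT_ITERATIONS;

    static {
        // The real application reads config.json with Jackson; this sketch
        // parses an inline properties string to stay self-contained.
        Properties p = new Properties();
        try {
            p.load(new StringReader("threads=4\niterations=4"));
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
        DEFAULT_THREADS = Integer.parseInt(p.getProperty("threads"));
        DEFAULT_ITERATIONS = Integer.parseInt(p.getProperty("iterations"));
    }

    public static void main(String[] args) {
        System.out.println("threads=" + DEFAULT_THREADS
                + " iterations=" + DEFAULT_ITERATIONS);
    }
}
```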

The other part of our Multithreading demo, Multithreading Demo Improved, contains the changed code. We’ll change to its directory and build with Maven. The numbers are significantly different:

As you can see, we were able to reduce the image size to 3.7M! Our dumpfileimproved.bgv also shrank, because it now contains less information.

Another important bit to mention is the --initialize-at-build-time build argument used with the Native Image Maven plugin. It instructs the native image builder to initialize classes at image build time, so the above-mentioned package is not needed at run time. That decreased the overall size of our executable.
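As an illustrative sketch, such a flag is simply passed among the other build arguments (the file and JAR names are placeholders, not the demo’s exact configuration):

```
# Initialize classes at build time while still dumping dashboard data.
native-image -jar app.jar \
    --initialize-at-build-time \
    -H:DashboardDump=dumpfileimproved -H:+DashboardAll
```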

Finally, because these constants are loaded directly into the heap, we no longer need to include the config.json file in the resources, further reducing the image size.


The GraalVM Dashboard is an interesting option for visualizing which code and data get compiled into the executables built by native-image. It offers a quick way to identify the largest components contributing to the size and guides the optimization process.

The dashboard is being updated regularly, independently of the GraalVM releases.

If you experience any issues with the dashboard, or have an open question, feel free to share it via Slack or GitHub, or reach out to Ondřej Douda, Aleksandar Prokopec, or Olya Gupalo directly.

The GraalVM Dashboard has been actively used internally by the GraalVM team and has proven useful. Give it a try, gain insights into your applications, and get back to us with questions or feedback!

The article is written in cooperation with Ondřej Douda.