Running a JVM inside a container: what you need to know

Paul Murphy
Tech @ Domain
3 min read · Jul 19, 2018

For most of my career, an infrastructure team has been responsible for managing the runtime environment of the applications I’ve worked on. That included management of the JVM available on the servers in those environments.

You build it, you run it — Werner Vogels

But these days development teams are becoming more and more responsible for their applications in production, and thanks to containerization and Docker we now have the ability to manage the configuration of the JVM.

When setting up our projects for the first time, I decided to avoid explicitly tuning the JVM with heap size parameters and chose to let the JVM ergonomics do it for me. One less configuration to worry about. However, the results were not what I expected.

What is JVM ergonomics? Ergonomics provides platform-dependent defaults for the garbage collector, heap size and the runtime compiler. These defaults should match the needs of different types of applications while requiring less explicit tuning. Where memory is concerned, the following applies:

  • Initial heap size of 1/64 of physical memory
  • Maximum heap size of 1/4 of physical memory
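For a concrete sense of those ratios, here is a quick sketch; the 8 GiB host size is an assumed example, not a value from this article:

```shell
# Assume a host with 8 GiB (8192 MB) of physical memory -- illustrative value only
PHYS_MB=8192

# Ergonomic defaults: initial heap = 1/64 of physical memory, max heap = 1/4
INITIAL_HEAP_MB=$((PHYS_MB / 64))
MAX_HEAP_MB=$((PHYS_MB / 4))

echo "Initial heap: ${INITIAL_HEAP_MB}M"   # Initial heap: 128M
echo "Max heap: ${MAX_HEAP_MB}M"           # Max heap: 2048M
```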

Perfect. Those seem like very reasonable defaults to me. However…

Docker container exited with non-zero exit code: 137

Exit code 137 is effectively the result of a kill -9, which for a Java application running in a container is usually the result of the application hitting an OOM (out of memory) condition. But the JVM ergonomics should have set the maximum heap size to 1/4 of the available memory, so what went wrong?

How the JVM determines physical memory inside a container did not work how I expected, and the reason why is a feature of the Linux kernel called cgroups.

Containers are made possible by kernel features of the operating system, one of which is cgroups, which isolates the resource usage (CPU, memory, disk I/O) of containers. But not all software is aware of cgroups, which can lead to strange results, as we can see when we run the free command:
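For reference, the memory limit a container runtime imposes via cgroups is exposed as a plain file inside the container; the paths below assume standard cgroup v1 and v2 layouts, and the block only computes what a 100m limit looks like in bytes:

```shell
# Inside a container started with `docker run -m=100m ...`, cgroup-aware
# tools read the limit from a file rather than asking the host:
#   cgroup v1: /sys/fs/cgroup/memory/memory.limit_in_bytes
#   cgroup v2: /sys/fs/cgroup/memory.max

# A 100m limit corresponds to this many bytes:
LIMIT_BYTES=$((100 * 1024 * 1024))
echo "$LIMIT_BYTES"   # 104857600
```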

docker run -it -m=100m --memory-swap=100m centos free -h
              total        used        free      shared  buff/cache   available
Mem:           7.8G        2.4G        3.2G        1.1M        2.1G        5.1G
Swap:          1.0G          0B        1.0G

And JDK8

docker run -m=100m --memory-swap=100m openjdk:8 java -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 1.73G
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~deb9u1-b11)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)

In both commands, I start a container with 100MB of RAM and swap. Yet the free command shows a total of 7.8G, and the JVM has allocated 1.73G to the max heap. These values are based on the total memory available to the Docker host, not the container.

The JVM, prior to JDK 10, is unaware of cgroups, so the ergonomic calculations are based not on the limits of the container, but on the memory available to the host. Hello OOM!

To fix this we need to set the JVM max heap size, but I’m still not inclined to set it explicitly. Ideally, the JVM would still provide sensible defaults. As it turns out, the JVM has some additional flags which allow it to do just that.

-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap

With these added to our java command, we should see the correct heap size calculations:

docker run -m=100m --memory-swap=100m openjdk:8 java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 44.50M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-1~deb9u1-b11)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)

Perfect. The JVM is now using the correct resource allocations for the container and setting the Max heap size based on those values.
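Rather than passing these flags on every docker run, they can be baked into the image. A minimal Dockerfile sketch, assuming a JDK 8 base image and an app.jar placeholder for your application artifact:

```dockerfile
FROM openjdk:8

# app.jar is a placeholder for your application artifact
COPY app.jar /opt/app.jar

# Enable cgroup-aware heap sizing (experimental on JDK 8)
ENTRYPOINT ["java", \
  "-XX:+UnlockExperimentalVMOptions", \
  "-XX:+UseCGroupMemoryLimitForHeap", \
  "-jar", "/opt/app.jar"]
```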

If the JVM ergonomics calculation for the max heap is not what you need, you can also add the -XX:MaxRAMFraction flag, which lets you change the fraction of the available RAM allocated to the max heap. The default value is 4, i.e. 1/4 of the total available; -XX:MaxRAMFraction=2 would change it to half of the total.
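A quick sketch of the arithmetic; the 1024MB container limit is an assumed example, and note that MaxRAMFraction is a divisor, not a percentage:

```shell
# Assume a container limit of 1024MB -- illustrative value only
LIMIT_MB=1024

# -XX:MaxRAMFraction is a divisor: max heap ~= available RAM / MaxRAMFraction
HEAP_QUARTER_MB=$((LIMIT_MB / 4))   # default, MaxRAMFraction=4
HEAP_HALF_MB=$((LIMIT_MB / 2))      # with -XX:MaxRAMFraction=2

echo "${HEAP_QUARTER_MB}M ${HEAP_HALF_MB}M"   # 256M 512M
```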
