Setting Java Heap Size Inside a Docker Container

Fredrik Fischer
Nordnet Tech
Sep 22, 2023

Running Java applications in a container might seem like a trivial task, but there are some pitfalls that can cause problems in production. This article will explain how to set the Java Heap Size inside a Docker container.

The virtual memory used by a Java process extends far beyond just the Java heap. The JVM includes many subsystems:

  • Garbage Collector
  • Class Loading
  • JIT compilers, etc.

All of these subsystems require a certain amount of RAM to function, and the JVM is not the only consumer of RAM. Native libraries (including the standard Java Class Library) may also allocate native memory, and those allocations are not even visible to Native Memory Tracking. The Java application itself can also use off-heap memory by means of direct ByteBuffers.

In short, the JVM needs additional memory beyond the heap (code cache, off-heap allocations, thread stacks, GC data structures, and so on), as does the operating system.
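One way to see this split from inside the JVM is the standard java.lang.management API. The sketch below is a minimal, illustrative example (the class name is my own); note that it only reports what the JVM itself tracks, so native allocations made by libraries outside the JVM still do not show up here:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal sketch: print heap vs. non-heap usage as reported by the JVM itself.
public class MemoryFootprint {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();

        System.out.printf("Heap:     used=%d MiB, max=%d MiB%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));
        System.out.printf("Non-heap: used=%d MiB%n",
                nonHeap.getUsed() / (1024 * 1024));
    }
}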

Problem Statement

When running Java in a container and using heap memory as the indicator of memory consumption, there is a risk that the JVM application gets killed because the container running the JVM allocates more memory than the amount requested from Kubernetes.

How does this happen?

Kubernetes notices that the container is using more memory than allowed by the limits section of the deployment:

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "300Mi"
        cpu: "250m"
      limits:
        memory: "300Mi"
        cpu: "500m"

Kubernetes then kills the container and restarts it. This happens silently, without the JVM noticing, so no exception is thrown in the application.

The mismatch between the heap usage and the total memory consumption of a container can be seen in the figures below, which depict the memory consumption of an application from the Kubernetes point of view (upper figure) versus the Java point of view (lower figure):

[Figure: output of “docker stats” for the container]
[Figure: VisualVM attached to the JVM running in the container]

Comparing the figures allows us to conclude that:

  • [Kubernetes] sees that the app is currently consuming 178MB (169.8MiB) of memory
  • [Java Application] says that it is consuming 79MB (75.3MiB) of Heap memory
  • [Java Application] says that it can allow the Heap memory to grow to 243MB (231.7MiB)

And we can also conclude that both [Java Application] and [Kubernetes] are correct.

The problem is that one can easily fall into the trap of thinking that only the Java heap memory consumes the container memory.

On the contrary: the smaller the application, the larger the percentage of memory that is required for the other memory buckets the Java application needs in order to work.

[Figure: typical Java memory footprint]

Restricting the heap memory consumption will partially limit the total memory consumption, but it will not guarantee that the application avoids an Out Of Memory (OOM) error.

There are two scenarios that can trigger an OOM condition.

Kubernetes — OOM Exception

In this scenario, the non-heap memory consumes more than intended. When the heap memory then increases, the total container memory grows beyond the maximum allowed for the container, and Kubernetes forces a restart of the container.
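As an illustration of how off-heap memory can grow while the heap barely moves, consider direct ByteBuffers. The sketch below (class name and sizes are my own, purely illustrative) allocates memory outside the Java heap, which raises the container's total memory usage without a corresponding increase in heap usage:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: direct ByteBuffers are allocated outside the Java heap,
// so the container's memory usage grows while heap usage stays almost flat.
public class OffHeapGrowth {
    public static void main(String[] args) throws InterruptedException {
        List<ByteBuffer> buffers = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            buffers.add(ByteBuffer.allocateDirect(10 * 1024 * 1024)); // 10 MiB off-heap
            long heapUsedMiB = (Runtime.getRuntime().totalMemory()
                    - Runtime.getRuntime().freeMemory()) / (1024 * 1024);
            System.out.printf("Allocated %d MiB off-heap, heap used: %d MiB%n",
                    (i + 1) * 10, heapUsedMiB);
            Thread.sleep(100);
        }
    }
}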

This kind of restart can be hard to detect but can be seen by describing the pod:

kubectl describe pod <POD_NAME> -n <NAMESPACE>

Name:           test-webapp-d5f9b9d8d-flqjk
Namespace:      default
Priority:       0
Node:           minikube/192.168.49.2
Start Time:     Mon, 06 Jun 2022 18:00:04 +0530
Labels:         pod-template-hash=d5f9b9d8d
                run=test-webapp
Annotations:    <none>
Status:         Running
IP:             10.244.0.16
IPs:
  IP:           10.244.0.16
Controlled By:  ReplicaSet/test-webapp-d5f9b9d8d
Containers:
  test-webapp:
    Container ID:   docker://d581e3e779ae164630de23594b0c4df8c1eecacdbd6b0b7e68655656d37c7491
    Image:          k8s.gcr.io/hpa-example
    Image ID:       docker-pullable://k8s.gcr.io/hpa-example@sha256:581697a37f0e136db86d6b30392f0db40ce99c8248a7044c770012f4e8491544
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137

Notice the “Last State” field and its “Reason”.

Java — Heap is requesting more memory than allowed

This scenario is more familiar to most developers: the heap memory has exceeded its limit. This causes an exception in the application logs:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at com.example.OutOfMemoryErrorExample.generateOOM(OutOfMemoryErrorExample.java:6)
    at com.example.OutOfMemoryErrorExample.main(OutOfMemoryErrorExample.java:10)
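A stack trace like the one above would come from a class along the following lines. This is an illustrative reconstruction rather than the exact code: it simply keeps allocating until the heap limit is reached:

package com.example;

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: keeps references to ever more arrays so nothing can be
// garbage collected, eventually triggering OutOfMemoryError: Java heap space.
public class OutOfMemoryErrorExample {
    public static void generateOOM() {
        List<long[]> hoard = new ArrayList<>();
        while (true) {
            hoard.add(new long[1_000_000]); // ~8 MB per iteration, never released
        }
    }

    public static void main(String[] args) {
        generateOOM();
    }
}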

To avoid both of these scenarios, we must:

  • Be aware of the memory required for the other buckets of application memory
  • Set restrictions on the heap memory, taking the other memory buckets into consideration

We suggest using the following JVM parameters (given a scenario where the non-heap overhead is 30% of the container memory):

-XX:MaxRAMPercentage=70.0     // In this case the overhead (what the other memory buckets require) is 30%
-XX:InitialRAMPercentage=70.0 // Should be set equal to MaxRAMPercentage
-XX:+ExitOnOutOfMemoryError   // Makes sure the JVM exits and the container restarts cleanly after an OOM occurrence

Rationale:

  • MaxRAMPercentage: Sets the maximum heap size for the JVM as a percentage of the allocatable memory of the container (the container memory limit, cgroup/memory/memory.limit_in_bytes). The heap size is set at startup and does not change during runtime.
  • InitialRAMPercentage: Sets the initial heap size for the JVM. Setting the initial heap size equal to the maximum heap size has some advantages: you avoid the garbage collection pauses incurred whenever the heap grows from its initially allocated size.
  • +ExitOnOutOfMemoryError: The JVM exits as soon as an OutOfMemoryError is thrown. This forces a restart and prevents the application from running on in an erroneous state.
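One way to verify that the flags take effect inside the container is to print the maximum heap the JVM resolved at startup. The sketch below (class name is my own) should report roughly 70% of the container memory limit when -XX:MaxRAMPercentage=70.0 is set and the container limit is detected:

// Minimal sketch: print the maximum heap the JVM actually resolved at startup.
public class MaxHeapCheck {
    public static void main(String[] args) {
        long maxHeapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMiB + " MiB");
    }
}

If the reported value looks like a percentage of the host's memory rather than of the container limit, the JVM did not pick up the container limit; modern JDKs detect it by default via container support.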

Final words & Summary

When running a Java application in a container, remember to keep track of OOM occurrences; they are often most frequent right after an application is first deployed.

To keep track of the OOM occurrences, make sure to:

  • Check the application logs
  • Check the Kubernetes pods for restarts and investigate why the restarts happened

To be proactive about OOM occurrences, make sure you know your application: how many dependencies and threads it uses, since these increase the memory footprint. Use that insight to set the JVM flags to levels suitable for your application:

-XX:MaxRAMPercentage
-XX:InitialRAMPercentage
-XX:+ExitOnOutOfMemoryError
