Properly limiting the JVM’s memory usage (Xmx isn’t enough)

Matt Rasband
3 min read · Jun 3, 2017


The JVM is known to be greedy with memory and is infamously difficult to tune. This becomes pretty apparent when you set -Xmx and find that the application exceeds that value, and even more apparent when running a JVM-based application in Docker, because in many cases the JVM can see the host's memory. The problem can manifest in any number of ways, such as higher latencies due to garbage collection or memory swapping, and in some cases (such as in Docker) the process getting OOM-killed.

Most solutions to this issue suggest just setting -Xmx256m and calling it a day. Unfortunately, that only limits the max heap size, not the total amount of memory the JVM will use: you also need to account for metaspace, class space, stack size, and more. You can read a bit more in depth here. In short, the actual maximum memory used by your application is roughly (credit to the link above):

Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss] 

Of course the JVM itself needs some space to do its thing as well, so there is still a bit of overhead on top of that. (On Java 8 and later, -XX:MaxMetaspaceSize takes the place of -XX:MaxPermSize.) For example, -Xmx256m plus a 128m metaspace cap and 50 threads at the default 1MB stack size already adds up to roughly 434MB, before counting direct buffers, the code cache, or the JVM's own bookkeeping.

Long story short, setting only -Xmx merely defers the point at which your application shows symptoms of using more memory than expected. Depending on usage volume that point can be pushed out quite a while, but eventually the symptoms of improper JVM tuning will be visible. Running in Docker, we had even seen cascading restarts due to service dependencies (really only core services like central configuration and service discovery caused this).

Tuning all of these values by hand would be a pain, and the work could need to be redone every time you pull in a new dependency, for example; it is exactly the kind of thing that should be automated. We weren't going to settle for manual tuning, so we set out to find out how some of the pros do it and stumbled onto the Java buildpack for Cloud Foundry. That group clearly knows how to deploy JVM-based applications, since they work with the major backers of the Spring Framework.

We don't use Cloud Foundry, great ecosystem though it is, but we traced our way down to their memory calculator 🎉 for Java. During the research it was also hard to know what defaults to use; fortunately, Dave Syer has published sensible defaults for Spring Boot applications. We plugged those in and haven't seen any major anomalies in our JVM memory usage since:

# -poolType metaspace assumes you are on Java 8+; appJarInMB is your application jar's size in MB
./java-buildpack-memory-calculator \
-loadedClasses $((400 * appJarInMB)) \
-poolType metaspace \
-stackThreads $((15 + appJarInMB * 6 / 10)) \
-totMemory 512M

You can sensibly use that to run your application and be in pretty good shape. I wrote a simple script to do this dynamically at each Docker container boot (to ensure the settings always respect the environment they run in, which is good practice for anything containerized; see the Twelve-Factor App):
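That script isn't reproduced here, but a minimal sketch of the idea might look like the following (the APP_JAR and MEM_LIMIT_MB variable names are assumptions for illustration, not part of the original script):

#!/usr/bin/env bash
# entrypoint.sh: size the JVM at container boot, then hand off to java.
# Assumes java-buildpack-memory-calculator is on the $PATH.
set -euo pipefail

APP_JAR="${APP_JAR:-/app/app.jar}"       # path to the application jar (assumed name)
MEM_LIMIT_MB="${MEM_LIMIT_MB:-512}"      # container memory limit in MB (assumed name)

# Derive the calculator inputs from the jar size, per the defaults above.
jar_mb=$(( $(stat -c %s "$APP_JAR") / 1024 / 1024 ))
loaded_classes=$(( 400 * jar_mb ))
stack_threads=$(( 15 + jar_mb * 6 / 10 ))

# Ask the calculator for a set of memory flags that fit inside the limit.
jvm_opts=$(java-buildpack-memory-calculator \
  -loadedClasses "$loaded_classes" \
  -poolType metaspace \
  -stackThreads "$stack_threads" \
  -totMemory "${MEM_LIMIT_MB}M")

# exec so the JVM runs as PID 1 and receives signals directly.
exec java $jvm_opts -jar "$APP_JAR"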

You will get a few JVM args to provide to the java command (the inputs in this example are pretty arbitrary and are only there for the output):

$ java-buildpack-memory-calculator -loadedClasses 400 -poolType metaspace -stackThreads 300 -totMemory 1024M
-XX:CompressedClassSpaceSize=8085K -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=15937K -Xss1M -Xmx461352K -XX:ReservedCodeCacheSize=240M

The program will even tell you, with an error, if your desired allocation is unlikely to work well for the JVM, while still trying to give you values as close to workable as it can.

Originally I was going to recommend setting -Xms to match -Xmx, but Glyn Normington pointed out that this causes complications from an autoscaling point of view (see the relevant GitHub issue). If you autoscale based on application memory, it's best to use the calculator's output as-is. However, if you plan to run more simply with a few servers behind a load balancer, it's probably good to match -Xms and -Xmx to avoid random paging issues or memory exhaustion on the host.
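If you do go the fixed-fleet route, one way to mirror the calculated -Xmx into -Xms might look like this (a sketch only; the inputs are the same arbitrary ones as the example above):

# Pull the calculated -Xmx out of the calculator's output and reuse its value for -Xms.
jvm_opts=$(java-buildpack-memory-calculator -loadedClasses 400 -poolType metaspace -stackThreads 300 -totMemory 1024M)
xmx=$(printf '%s\n' "$jvm_opts" | grep -o -e '-Xmx[0-9]*[KMG]')
exec java $jvm_opts "-Xms${xmx#-Xmx}" -jar app.jar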

Simply enough, your Docker container (or host) needs bash and the java-buildpack-memory-calculator available on the $PATH.

No container crashes due to OOM to date :).

Update 2017/06/05: /u/dleskov on Reddit pointed out this excellent JVM options cheatsheet — Thanks!

Update 2017/12/04: Jochen Mader (@codepitbull on Twitter) suggests some additional JVM flags, available as of 8u131, in this tweet:

-XX:+UnlockExperimentalVMOptions
-XX:+UseCGroupMemoryLimitForHeap
-XX:MaxRAMFraction=1
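Combined with a container memory limit, those flags let the JVM size its heap from the cgroup limit rather than the host's RAM. A sketch (the image name, memory limit, and jar path are placeholders, not from the original post):

# my-app-image is assumed to ship a JRE of at least 8u131 along with the application jar.
docker run -m 512m my-app-image \
  java -XX:+UnlockExperimentalVMOptions \
       -XX:+UseCGroupMemoryLimitForHeap \
       -XX:MaxRAMFraction=1 \
       -jar /app/app.jar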
