Memory Profiling with VisualVM

Kavindu Vindika
Jul 3, 2022 · 6 min read

Have you ever experienced an out-of-memory error? If not, that's great to hear.

Out of memory error in Java

Moving from low-level to high-level computation brings a huge improvement in how readable programs are for humans, but it is a trade-off between ease of software development and higher resource utilization. In hardware development we have complete authority over resource utilization and memory handling: memory allocation on an FPGA, or a linker script that lays out memory for bare-metal C on an SoC processor, are examples of embedded development where memory is managed by hand.

But it is not the same with high-level languages, where memory is allocated dynamically at run time.

Memory Layout in C and Java

In C programming, the stack and the heap are the main regions of the memory layout. While the stack frees itself automatically once its local variables go out of scope, the heap requires our direct intervention for dynamic memory allocation and deallocation.

In Java, all objects are created on the heap, while references to them are stored on the stack of the relevant thread. The memory of the JVM (Java Virtual Machine) can be divided as follows.

Memory Division in JVM
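
Before walking through these regions in detail, a tiny snippet makes the stack/heap split described above concrete (the class and variable names are purely illustrative):

public class MemoryLayoutDemo {

    static class User {                    // instances of User live on the heap
        String name;
        User(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        int count = 3;                     // primitive local variable: lives on the thread's stack
        User user = new User("Alice");     // the reference 'user' is on the stack,
                                           // the User object it points to is on the heap
        user = null;                       // once no reference reaches the object,
                                           // it becomes eligible for garbage collection
        System.out.println(count);
    }
}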

All new objects are allocated in the Eden space of the Young Generation. Once Eden is full, garbage is collected and the objects that survive are promoted to the survivor spaces, S0 and S1. The Old Generation holds long-surviving objects.
The Permanent Generation (replaced by Metaspace in modern JVMs) holds metadata for the runtime classes and application methods. It actually sits outside the JVM heap and is allocated from native memory.

Minor garbage collections (minor GCs) happen in the Eden space once it fills up. Objects that are still referenced survive and move to S0, and then to S1 if they survive another GC round. Objects that keep on surviving are eventually promoted to the Old Generation. When the Old Generation fills up, the JVM performs a full (major) GC to remove unused objects from the heap.
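
To watch this generational behavior without a profiler, you can run a small allocation loop with GC logging turned on. This is only a sketch; compile it and run it with java -Xlog:gc GcDemo on Java 9+ (or with -verbose:gc on older JVMs):

import java.util.ArrayList;
import java.util.List;

public class GcDemo {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 100_000; i++) {
            byte[] shortLived = new byte[10_240];  // dies young, reclaimed by minor GCs in Eden
            if (i % 100 == 0) {
                retained.add(new byte[10_240]);    // stays referenced, so it can survive and be promoted
            }
        }
        System.out.println("Retained " + retained.size() + " long-lived arrays");
    }
}

In the GC log you should see frequent young-generation collections reclaiming the short-lived arrays, while the retained ones accumulate in the older spaces.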

Memory Leaks in JVM

JVM memory leaks happen when objects are still reachable from GC roots, meaning references to them still exist, but are no longer useful to the application. Because they are reachable, the garbage collector cannot remove them from the heap, and their footprint grows over time.

Therefore, a memory leak is a function of time, not a function of load. If the application consumes a huge amount of memory and hits an out-of-memory error during a load test, that doesn't mean it has a memory leak; it may simply be running a memory-intensive task. In that case, allocate more memory and check whether memory consumption keeps growing over time.
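
A very common source of such a leak is a collection that is reachable from a GC root, for example a static field, and that only ever grows. The sketch below is illustrative and not taken from any particular project:

import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // A static field is reachable from a GC root, so nothing it references can ever be collected.
    private static final List<String> CACHE = new ArrayList<>();

    public static void handleRequest(String payload) {
        CACHE.add(payload);   // added on every request, never removed:
                              // heap usage grows with time, not with load
    }
}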

Now we can identify the symptoms of a memory leak:

  • Heap usage growing over time
  • GC frequency increasing
  • Eventually, an out-of-memory error

The following figures were taken from the VisualVM monitoring tool to illustrate stabilized and destabilized memory consumption.

Stabilized Memory Consumption

The blue line shows the memory actually used by the application, and the orange line shows the memory currently available to the application.

With API invocations, heap consumption rises (blue line), but garbage collection removes the unnecessary objects from the heap, so memory consumption comes back down. It fluctuates from time to time with API invocations, but the GC eventually brings it back to roughly the level it had when the application started, signifying that the application is stable and has no memory leaks.

Stabilized Memory Consumption

Destabilized Memory Consumption

If the application contains a memory leak, memory consumption grows with time. In addition, the memory available to the application (orange line) keeps rising, which can adversely affect the server the application runs on (if the maximum heap size of the JVM is not defined; we'll discuss that shortly).

Garbage collection tries to bring memory consumption down (the small dips in the blue line), but because of the leak it is not effective. Even once the API invocations stop, memory consumption stays at the same level instead of dropping.

Destabilized Memory Consumption

Monitoring JVM

To monitor the JVM, we can use jmap and jstat as CLI tools and VisualVM as a GUI tool. Additionally, AWS provides CodeGuru, a machine-learning-powered profiling service.
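
For a quick command-line view, the JDK tools can be pointed at the application's process ID (replace <pid> with the ID reported by jps; the dump file name is arbitrary):

jps                                              # list local JVM process IDs
jstat -gcutil <pid> 1000                         # print heap/GC utilization every second
jmap -histo:live <pid>                           # histogram of live objects by class
jmap -dump:live,format=b,file=heap.hprof <pid>   # take a heap dump for offline analysis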

In this article, I'll use VisualVM to monitor a sample Java application built with the Spring Boot framework.

Set up VisualVM

  1. Download VisualVM
  2. On Linux and Windows, extract the zip archive and go into the bin directory
  3. Run the shell script (.sh) on Linux or the executable (.exe) on Windows

Set up Project

  1. Clone the sample API project repository
  2. Build the project and create the JAR file (if you use IntelliJ to build it, you'll find it as mem-handle-0.0.1-SNAPSHOT.jar in the /target directory)
  3. Run the JAR file: java -jar mem-handle-0.0.1-SNAPSHOT.jar
  4. Open Swagger at http://localhost:8080/swagger-ui.html and check that all the API endpoints are working
  5. Check that our Java API appears in the Applications panel (left-side panel) of VisualVM and select it
  6. Move to the Monitor tab to inspect the CPU and memory usage of the application
Swagger UI of the application
VisualVM monitoring our application

Monitor memory usage of the application

By invoking the users/non-static endpoint, we create a list of objects that are local to the handling method. Once the method invocation is over, those objects are removed from memory during garbage collection. As a result, you can see stabilized memory consumption.

But when you invoke the users/static endpoint, we append to a static list of objects that is not removed from memory once the API invocation completes. It therefore remains in memory and grows over time, which shows up as destabilized memory consumption.
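
I haven't reproduced the repository's code here, but the two endpoints behave roughly like the sketch below; the class, method, and field names are my assumptions, not the actual project code:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;

@RestController
public class UserController {

    // Reachable from a GC root (static field), so its contents are never collected.
    private static final List<String> STATIC_USERS = new ArrayList<>();

    @GetMapping("/users/non-static")
    public int nonStatic() {
        List<String> users = new ArrayList<>();   // local list: garbage once the method returns
        for (int i = 0; i < 100_000; i++) {
            users.add("user-" + i);
        }
        return users.size();
    }

    @GetMapping("/users/static")
    public int leakyStatic() {
        for (int i = 0; i < 100_000; i++) {
            STATIC_USERS.add("user-" + i);        // grows on every invocation: a deliberate leak
        }
        return STATIC_USERS.size();
    }
}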

Such destabilized memory consumption could cause the JVM to seize all of the memory on the machine, which we really need to avoid!

FYI: invoke the API endpoints continuously, or many times over, to reproduce memory consumption similar to the figures above.
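
One simple way to do that from a terminal is a shell loop (assuming the paths exposed in Swagger are /users/static and /users/non-static):

while true; do
  curl -s http://localhost:8080/users/static > /dev/null
  sleep 1
done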

How to avoid memory leaks

Use the following JVM options to restrict the heap, Metaspace, and total RAM available to the application:

  • -Xms
    Initial heap size allocated to the JVM
  • -Xmx
    Maximum heap size allocated to the JVM
  • -XX:MaxMetaspaceSize
    Maximum size of the Metaspace
  • -XX:MetaspaceSize
    Once Metaspace usage reaches this level, a full GC is triggered
  • -XX:MaxRAM
    Restricts the JVM from using memory beyond this limit. If the application tries to exceed it, an OutOfMemoryError is thrown and the application crashes instead of consuming the whole machine

Use the following command to run the JAR file with the above JVM flags:

java -Xms200m -Xmx200m -XX:MaxMetaspaceSize=75m -XX:MetaspaceSize=75m -XX:MaxRAM=275m -jar mem-handle-0.0.1-SNAPSHOT.jar

Now you'll see that the JVM is restricted to a maximum heap size of 200 MB (orange line). The following figure was taken while invoking the /non-static endpoint.

JVM memory restriction

If you invoke the /static endpoint now, you'll see that the available heap no longer grows as it did before. As the JVM approaches the maximum heap size, it performs full GCs more and more frequently to reclaim memory and keep the application from crashing, which is why you can see a huge spike in CPU usage in the following figure. Once the GC can no longer free enough memory, the application fails with an OutOfMemoryError.

Huge spike of CPU usage to perform full GCs and blocking JVM from consuming more memory

Check the application logs and you'll find it throwing a Java heap space error, as follows.

Out of memory error caused by the application

I hope you now understand the basics of VisualVM and why it is essential to restrict the memory consumption of your applications so they don't bring down the server they run on.
