Garbage Collection in Elasticsearch and the G1GC

Prabin Meitei M
Oct 4, 2018 · 9 min read

The Investigation

For our centralized logging, the Elasticsearch cluster is made up of the following:

  • Elasticsearch version: 6.2.2
  • No. of indices: 375
  • No. of shards including replicas: 1500
  • No. of nodes: 4, each with 24 CPU cores, 64GB RAM and 2TB SSD
  • Java version: 1.8.0_151
  • JVM heap allocated on each node: 28GB

There is no dedicated master; all 4 nodes act as both master and data nodes.

Garbage Collection In Elasticsearch

In Elasticsearch the default garbage collector is Concurrent Mark Sweep (CMS). Elasticsearch ships with well-tuned default garbage collector settings, and the documentation even advises against tinkering with them. CMS is a big improvement over the older parallel GC in terms of pause times. Still, as mentioned in the documentation, long GC pauses can occur, for example due to:

  • Having a small heap size, leading to less space for long lived objects.

[Figure: GC duration with CMS]
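To measure pause durations like those plotted above, GC logging can be enabled via Elasticsearch's jvm.options file. A minimal sketch for Java 8 (on Java 9+ these flags were replaced by -Xlog:gc*); the log path is an assumption:

```
## jvm.options (Java 8) — enable GC logging to measure pause times
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
-Xloggc:/var/log/elasticsearch/gc.log
```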

Garbage First Garbage Collector (G1GC)

One of the issues with CMS is the need to collect the whole old generation at once whenever a GC is triggered, a consequence of CMS managing the old generation as one contiguous memory region. Hence CMS performance degrades as heap size increases.
G1GC instead uses non-contiguous memory regions. It divides the whole heap into smaller regions, targeting 2048 regions by default. The region size is chosen by the JVM as a power of 2 between 1MB and 32MB (it can be overridden with -XX:G1HeapRegionSize). Each region can serve as young gen or old gen. The advantage of having many regions is that the GC can decide to analyze only part of the heap, choosing the regions that contain the most garbage. How many regions to collect is also governed by the target pause time. In short, the GC runs over a smaller set of memory regions and avoids collecting the whole old gen at once, thereby reducing GC pause times. More details about G1GC can be found in Oracle's documentation. G1GC tends to perform better with larger heaps, generally greater than 10GB.
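Switching Elasticsearch to G1GC means replacing the CMS flags in jvm.options. A minimal sketch; the 200ms pause target is an illustrative assumption, not a recommendation:

```
## jvm.options — comment out the default CMS flags:
## -XX:+UseConcMarkSweepGC
## -XX:CMSInitiatingOccupancyFraction=75
## -XX:+UseCMSInitiatingOccupancyOnly
## and enable G1GC instead, with a target pause time:
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
```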

G1GC on Elasticsearch

For our use case, each node in the cluster was allocated a 28GB heap, which clearly suggests using G1GC instead of the default CMS. As seen above, GC pause durations were also quite high with CMS. The biggest deterrent, however, was the Lucene java bugs page, which at the time warned against running Lucene with G1GC.

[Figure: GC duration with G1GC]

CPU Usage in G1GC

The improvement in GC duration did not come free of cost. As expected, with G1GC there was an increase in CPU load average. The comparison of CPU load averages under CMS and G1GC is shown below.

[Figure: CPU load average with CMS (created using Zabbix)]
[Figure: CPU load average with G1GC]

Humongous Objects and Allocations

In G1GC the memory region size plays a very important role. As stated earlier, the default region size is decided by the JVM: it targets 2048 regions, keeping the size a power of 2 in the range 1MB to 32MB. This translates into the region sizes per allocated heap shown below. Objects larger than half the region size are treated as humongous: they are allocated directly into contiguous old gen regions and can trigger additional GC cycles, so a larger region size means fewer allocations cross the humongous threshold.
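The default region-size calculation described above can be sketched as follows. This is a rough approximation of HotSpot's heuristic (the exact rounding in the JVM source may differ):

```python
MB = 1024 * 1024

def g1_default_region_size(heap_bytes):
    """Approximate G1's default region size: heap / 2048 regions,
    rounded down to a power of 2, clamped to [1MB, 32MB]."""
    size = max(heap_bytes // 2048, MB)       # aim for ~2048 regions
    size = 1 << (size.bit_length() - 1)      # round down to a power of 2
    return min(size, 32 * MB)                # cap at 32MB

for gb in (4, 8, 16, 28, 64):
    # e.g. under this approximation a 28GB heap yields 8MB regions
    print(f"{gb:>2}GB heap -> {g1_default_region_size(gb << 30) // MB}MB regions")
```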

[Table: default region sizes per allocated heap]
[Figure: GC causes for region size 8MB]
[Figure: GC causes for region size 16MB]
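The region size can also be set explicitly in jvm.options rather than left to the JVM's default. A sketch of the override; 16m here mirrors the larger of the two sizes we experimented with:

```
## jvm.options — force a larger G1 region size so fewer
## allocations cross the humongous threshold (half a region)
-XX:G1HeapRegionSize=16m
```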


For our use case, G1GC definitely provides an improvement in GC pauses and overall cluster performance. The issues related to G1GC on Lucene seem to have been resolved, or at least did not come up in our scenario. CMS still performs very well with smaller heaps. The decision to use G1GC, especially in Elasticsearch, should be based on the application's throughput needs and the cluster configuration, and G1GC should be configured accordingly. One thing to remember is that G1GC does not eliminate stop-the-world pauses; it just tries to keep them short. It is helpful to analyse your GC logs with tools to figure out how the JVM is handling memory.

Naukri Engineering

Think, Develop, Rollout, Repeat. The world class recruitment platform made with love in India.

