Alpine vs. Debian Container Images for Java / JVM Builds

Rishi Goomar · Published in Rocket Travel · Aug 13, 2019

At Rocketmiles, we use a lot of JVM-based technologies: Kotlin, Groovy, Java, Gradle, and so on. We run a containerized Jenkins 2 setup to build and deploy these applications. As we started to scale, the need for custom build containers grew: by loading dependencies into the container image ahead of time, we avoid downloading them on every build. As part of this effort, we tried to keep the agent containers as small and as fast as possible by using Alpine base images. Otherwise, we would be adding more bloat to our existing image, which already carries more tooling than any given app needs.
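To illustrate the idea of pre-loading dependencies, a build-agent image along these lines can resolve the Gradle dependency graph at image-build time so agents do not re-download everything on each CI run. This is a hypothetical sketch, not our actual Dockerfile; the base image tag, paths, and file names are assumptions.

```dockerfile
# Hypothetical build-agent sketch -- base image and paths are illustrative.
FROM openjdk:8-jdk-slim

WORKDIR /home/agent

# Copy only the build metadata and the Gradle wrapper, not the sources,
# so this layer is invalidated only when dependencies change.
COPY build.gradle settings.gradle gradlew ./
COPY gradle ./gradle

# Resolve dependencies at image-build time so they land in the image's
# Gradle cache (~/.gradle) instead of being downloaded on every CI build.
RUN ./gradlew --no-daemon dependencies
```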

The Results

Alpine is a great way to get small container images. However, what we found was that our tests and builds took longer when using Alpine as the base for our Gradle builds: about 10%–20% longer than our Ubuntu-based baseline.

Why?

We started to dig into what was going on here. How could our Gradle builds take longer on a smaller, lightweight distribution built for size and speed? What we found was that it came down to the underlying libc system library that Alpine uses: musl.

The underlying libc implementation was affecting the performance of the JVM during the build and test runs. Frankly, this was new to me and very interesting. Many Linux distributions use glibc. Debian uses glibc, and since Ubuntu is a Debian-based distribution, it was also using glibc under the hood, which is why our Gradle builds and tests were faster there.

Musl vs. Glibc

There’s a really great StackOverflow answer that benchmarks a Python app on both libraries and explains why musl happens to be slower. One of the key points is that musl is slower than glibc in the memory allocation timings:

Looking again at the benchmark, musl is really slightly slower in memory allocation:

                       |  musl  | glibc  |
-----------------------+--------+--------+
Tiny allocation & free |  0.005 |  0.002 |
-----------------------+--------+--------+
Big allocation & free  |  0.027 |  0.016 |
-----------------------+--------+--------+

I’m not sure what is meant by “big allocation”, but musl is almost 2× slower, which might become significant when you repeat such operations thousands or millions of times.

So, if you take this into the context of Gradle and the JVM, running large test suites involves a great many memory allocations. This is likely what contributes to the performance difference when the JVM allocates and frees memory for its heap.
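To get a feel for how allocation-heavy such workloads are, here is a rough Java sketch in the spirit of the benchmark quoted above: it churns through many tiny allocations and a smaller number of big ones, timing each pass. It is a toy illustration (class and method names are mine, and JIT warm-up and GC make the absolute numbers noisy), not the StackOverflow benchmark itself.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of an allocation-heavy workload, loosely mirroring the
// "tiny allocation & free" vs. "big allocation & free" rows above.
public class AllocBench {

    // Allocate `count` byte arrays of `size` bytes, keeping only a small
    // window alive so the garbage collector also exercises the free path.
    // Returns elapsed wall-clock time in nanoseconds.
    static long timeAllocations(int count, int size) {
        long start = System.nanoTime();
        List<byte[]> window = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            window.add(new byte[size]);
            if (window.size() == 64) {
                window.clear(); // drop references so the arrays can be freed
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long tiny = timeAllocations(1_000_000, 16);      // many tiny allocations
        long big  = timeAllocations(2_000, 1 << 20);     // fewer 1 MiB allocations
        System.out.printf("tiny: %d ms, big: %d ms%n",
                tiny / 1_000_000, big / 1_000_000);
    }
}
```

Running the same class on a musl-based and a glibc-based image is a cheap way to sanity-check whether the libc difference shows up for your own workload.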

Optimizing Our Image

We ended up using a lightweight Debian base image instead of the Alpine image, since it performed better and was only about 10% larger.

If you are going to run an application or build agent in containers, be sure to test its performance on Alpine vs. Debian (or your base image of choice). Depending on your workload, there is a chance that a glibc-based distribution will be more effective.

If you care about performance and enjoy solving these kinds of problems, come join our team!

Extra note: There are efforts to build Alpine images with glibc, which can be useful for those who experience a similar issue or need to use libraries that require glibc (e.g., Oracle Java).
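For reference, these images typically just layer a glibc build on top of the stock Alpine base, so using one is a one-line change. The image name below is a community example and an assumption on my part; it may have moved or gone stale, so verify it before relying on it.

```dockerfile
# Hypothetical: an Alpine base with glibc layered in, for software that
# requires glibc. The image name is a community example -- verify before use.
FROM frolvlad/alpine-glibc
```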
