Benchmarking your Gradle builds

Sakis Kaliakoudas
Apr 11, 2019

Build times have historically been a problem on Android: engineers are usually happy if they “only” wait a couple of minutes after making a small change and pushing it to the device, while longer waits are typical.

The more time you spend waiting for something to build, the less time you actually spend on engineering. Some people move on to another task while waiting for a build, but then they pay the price of context-switching; and if that other task is checking the /r/androiddev subreddit, well, we all know how long that takes :)

Here at Babylon, a clean Android build takes around 2.5 minutes, with incremental builds taking a bit under a minute. These times are for builds that run on Mainframer build servers; builds on dev machines take slightly longer.

For our team of 17 engineers, a small increase of 10 seconds for a build has the following impact:

10 seconds × 17 engineers × 20 builds a day × 230 working days per year (ish) = 782,000 seconds, or about 27 eight-hour working days!
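That back-of-the-envelope calculation is easy to check in a shell (the 8-hour working day is my assumption for converting seconds into days):

```shell
# Back-of-the-envelope cost of a 10-second regression (8-hour days assumed)
seconds=$(( 10 * 17 * 20 * 230 ))
echo "$seconds seconds per year"               # prints "782000 seconds per year"
echo "$(( seconds / 3600 / 8 )) working days"  # prints "27 working days"
```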

Those 27 days could have been spent better, refactoring some screens, or introducing a cool new technology rather than waiting!

To combat this, we wanted to make it easier for everyone to understand and measure how build configuration changes (e.g. enabling dex-in-process for our project) or application modularization affect our build times, for better or worse. Understanding your build times is not as easy as it sounds:

  • You need to run a build many times to get an average, which means you need to get the calculator out and add the numbers yourself 😅.
  • Typically you need to run this on your dev machine.
  • Using your dev machine for anything while benchmarking might affect the results.
  • Any unrelated process running in the background might affect the results as well.

Looking around, we quickly found the Gradle Profiler project. With this tool you can do two things: benchmark a build, or profile it.

From the library documentation:

“Benchmarking simply records the time it takes to execute your build several times and calculates a mean and standard error for it. It has zero impact on the execution time, so it is ideal for making before/after comparisons for new Gradle versions or changes to your build”.

While for profiling:

“Profiling allows you to get deeper insight into the performance of your build. The app will run the build several times to warm up a daemon, then enable the profiler and run the build.”

So to sum up: if you are after timings for your Gradle build, run the tool with the benchmark option; if you want insights into CPU utilisation, memory allocation, etc., run it with the profile option. For this article we’ll focus on benchmarking, as we are after timings for our builds rather than insights into why things take as long as they do (not that those insights aren’t interesting too).

To run a benchmark, you first need to check out the gradle-profiler project and run the following command:

./gradlew installDist

which will install the gradle-profiler executable into ./build/install/gradle-profiler/bin. The next step is to run the following:

gradle-profiler --benchmark --project-dir <root-dir-of-build> <task>

where <root-dir-of-build> is the directory containing the build to be benchmarked, and <task> is the name of the task to run, exactly as you would pass it to the gradle command. The results of the benchmark are produced in two formats: a CSV file containing all the timings along with some basic statistics (mean, median, standard deviation, etc.), and an HTML file with a plotted graph of these timings.

An example gradle-profiler CSV report
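Since the report is just a CSV, it is easy to post-process yourself. A quick sketch with awk (the column layout below is simplified for illustration; the real gradle-profiler CSV has additional header rows describing the scenario):

```shell
# Hypothetical, simplified timings CSV; the real report has more header rows
printf 'iteration,ms\n1,61234\n2,59876\n3,60450\n' > /tmp/timings.csv

# Mean of the ms column, skipping the header row
awk -F, 'NR > 1 { sum += $2; n++ } END { printf "%.0f\n", sum / n }' /tmp/timings.csv   # prints 60520
```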

Some interesting configuration options that the tool provides are the following:

  • The ability to define how many times you want your builds to run.
  • The number of warm-up builds to complete before running the measured builds.
  • Complex build scenarios: for example, you can ask gradle-profiler to run incremental builds that add a public method to a Java or Kotlin source file, or add a string to an Android resource file.
  • Overriding system properties, which, for example, can be used to disable the build cache.
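Options like these are usually driven from a scenario file. As a rough sketch of what one might look like (the scenario names and file paths here are made up for illustration; see the project’s README for the authoritative syntax):

```
# performance.scenarios (hypothetical): names and paths are placeholders
incremental_java_change {
    tasks = ["assembleDebug"]
    warm-ups = 6
    apply-abi-change-to = "app/src/main/java/com/example/Example.java"
}

incremental_resource_change {
    tasks = ["assembleDebug"]
    warm-ups = 6
    apply-android-resource-change-to = "app/src/main/res/values/strings.xml"
}

no_build_cache {
    tasks = ["assembleDebug"]
    gradle-args = ["--no-build-cache"]
}
```

You would then run gradle-profiler --benchmark --project-dir <root-dir-of-build> --scenario-file performance.scenarios to benchmark every scenario in the file.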

You can see the comprehensive list of options in the README file of the project.

An example gradle-profiler HTML report

By using this tool we have solved the first problem: running the build multiple times and calculating the average. However, we are still running gradle-profiler locally, making the dev machine unusable for the duration of the benchmark.

Moving to the cloud

What we ended up doing is writing a small Jenkins job that does the following:

  • Given two branch names, start up a Jenkins node in AWS.
  • Check out the gradle-profiler project and build it.
  • Benchmark the two provided branches, one after the other.
  • Generate reports for both.
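In pseudocode, the heart of such a job looks something like this (repository URLs, parameter names, and paths are all hypothetical; the real job is a standard Jenkins pipeline wrapped around shell steps like these):

```
# Pseudocode sketch of the Jenkins job's shell steps (all names hypothetical)
git clone <gradle-profiler-repo> && cd gradle-profiler
./gradlew installDist                     # builds the gradle-profiler binary

for branch in "$BASELINE_BRANCH" "$CANDIDATE_BRANCH"; do
    git -C ../our-android-app checkout "$branch"
    ./build/install/gradle-profiler/bin/gradle-profiler \
        --benchmark \
        --project-dir ../our-android-app \
        --output-dir "results/$branch" \
        assembleDebug
done
# archive results/ as the job's report artifacts
```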

The benefits of this are clear:

  • You can continue to use your computer; the benchmarking runs on AWS, and you get the results after a while. How long depends on how complex your build scenarios are and how many iterations you perform; for us this usually takes about 2 hours.
  • The gradle-profiler tool is essentially abstracted away, making this very easy to run, with sensible default scenarios.

Our Jenkins job parameters. We use defaults, so you only need to override the two branch names.

After implementing this, any Android engineer on the team can go to our Jenkins portal and run a comparison between two branches, to get a sense of the benefit (or cost) that a build configuration change or application modularization can bring to the project. So far, using gradle-profiler on Jenkins, we have found the following:

  • Updating the min SDK in dev builds from 19 to 23 improved build times by about 15%.
  • Reducing our 40 SDK modules down to 5 had no measurable impact on developer build times.
  • Detekt runs faster when added to individual modules rather than run once across all modules, because of parallel execution.

I should note that we initially had some issues with the precision of this approach: a benchmark comparing the develop branch against itself reported differences in build times of around 10%. We investigated whether the CPU cycles allocated to our AWS node instance were fluctuating, but the issue disappeared on its own, and before we knew it we were seeing at most a 1% discrepancy between two identical builds, which I would say is acceptable.

As a next step, we want to attribute build time regressions to specific commits. We already use Gradle Enterprise, so we have some history of our build times; however, these numbers are inconsistent because our usual CI builds don’t have a warm-up phase.

To that end, we are planning to run gradle-profiler in a nightly build on our develop branch, alerting us when build times increase over a certain threshold. That would then be a great starting point for someone to investigate the day’s commits.
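A minimal sketch of such a threshold check, assuming the nightly job extracts a single mean build time per run (the numbers and the 5% threshold here are illustrative, not our real values):

```shell
# Hypothetical nightly threshold check; baseline/tonight would come from
# the gradle-profiler CSV reports in practice
baseline=60000   # last night's mean build time, ms
tonight=64000    # tonight's mean build time, ms
threshold=$(( baseline * 105 / 100 ))   # allow up to a 5% regression

if [ "$tonight" -gt "$threshold" ]; then
    echo "build time regression: investigate today's commits"
fi
```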


Babylon Engineering

At Babylon we’re on a mission to put an accessible and affordable health service in the hands of every person on earth. The tech we use is the heart and soul of that mission. Follow our Medium blogs to learn more about how we do it.

Sakis Kaliakoudas

Written by

Android Engineer @ HSBC
