Evolving our Android CI to the Cloud (2/3): Dockerizing the Tasks
Could you sleep well knowing that your most critical development process can only run on a single machine? I doubt it. Having your CI/CD tied to a specific piece of hardware is a disaster waiting to happen: sooner or later that machine will go down for maintenance or, even worse, have a meltdown. There is a better way 🐳
In this post, we'll dive into the nitty-gritty of dockerizing your Android tasks, the foundation of our Cloud-based CI. You may wonder why you would dockerize your pipelines. Well, let me give you just three compelling benefits:
- Reliability: Allows your pipeline to run virtually anywhere: on-premise, Cloud, etc. No more costly disruptions in case of hardware breakdowns.
- Stability: Guarantees a stable execution environment that only changes when we decide it should.
- Isolation: Tasks don't interfere with one another. They run separately.
As the old popular Java motto goes:
"Write once, run anywhere"
Missed the kickoff of this series? Check it out to discover the story of our original CI/CD setup and the reasons that drove us to replace it with a dockerized CI.
Let's get down to it 💻
Creating the Android Build Image
Creating a Docker image is relatively easy for tasks like building your app and running your non-instrumented tests because neither of them requires an Android emulator or device.
In essence, these are the primary components that our image will include.
🐳 Before we continue, if you're new to Docker, this is a good starting point:
You can find the Dockerfile for our android-build image in this Gist. Let's break it down to make sense of it.
Base Image and Packages
The initial step involves selecting a base image and installing the necessary toolchain. While prebuilt Android images are readily available, we prefer to include only the essential tools. Therefore, let's build it from the ground up, using Ubuntu 23.10 as our foundation.
We set some environment variables to define where the Android framework is located and extend the PATH to simplify interaction with the SDK tools. Note that we add support for all architectures even though we only run on x86_64 hardware. The image also includes a few other packages that are useful for running CI scripts (e.g. python) or launching the tests selectively (e.g. git).
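A minimal sketch of this first stage could look like the following. The exact package list, JDK version, and SDK path are assumptions for illustration; the real file is in the Gist linked above:

```dockerfile
# Sketch of the base stage (versions and paths are assumptions)
FROM ubuntu:23.10

# Tell the tooling where the SDK lives and expose its binaries on PATH
ENV ANDROID_HOME=/opt/android-sdk
ENV PATH=$PATH:$ANDROID_HOME/cmdline-tools/7.0/bin:$ANDROID_HOME/platform-tools

# Toolchain plus helpers for CI scripts (python) and selective test runs (git)
RUN apt-get update && apt-get install -y --no-install-recommends \
        openjdk-17-jdk-headless curl unzip git python3 \
    && rm -rf /var/lib/apt/lists/*
```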
Android SDK
Next, we need to install the Android SDK and the platform tools. Here we use Android 31, but feel free to choose the version that aligns best with your specific needs.
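Condensed, that step amounts to downloading the command-line tools and then letting sdkmanager pull the packages. The download URL, cmdline-tools version, and build-tools revision below are assumptions; always grab the current link from developer.android.com:

```dockerfile
# Install the command-line tools (URL/version are assumptions, check
# developer.android.com for the current archive)
RUN mkdir -p $ANDROID_HOME/cmdline-tools \
    && curl -o /tmp/tools.zip \
       https://dl.google.com/android/repository/commandlinetools-linux-9477386_latest.zip \
    && unzip -q /tmp/tools.zip -d $ANDROID_HOME/cmdline-tools \
    && mv $ANDROID_HOME/cmdline-tools/cmdline-tools $ANDROID_HOME/cmdline-tools/7.0 \
    && rm /tmp/tools.zip

# Pull the SDK packages for our target API level
RUN sdkmanager "platform-tools" "platforms;android-31" "build-tools;31.0.0"
```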
To find out which packages are available for the different architectures, just list them with the sdkmanager:
$ANDROID_HOME/cmdline-tools/7.0/bin/sdkmanager --list
Another thing to handle is license acceptance. This is usually a manual step when you install the SDK from Android Studio. Fortunately, there is a handy way to accept the licenses via the sdkmanager:
yes | $ANDROID_HOME/cmdline-tools/7.0/bin/sdkmanager --licenses
Additionally, you'll need to copy them to the SDK location under the licenses subfolder. Check out the documentation for further details.
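If you pre-accept the licenses on a workstation, baking the resulting hash files into the image keeps every sdkmanager call non-interactive. The `licenses/` build-context folder here is a hypothetical name:

```dockerfile
# Copy pre-accepted license hashes (generated on a workstation) into the
# standard SDK location so sdkmanager never prompts during the build
COPY licenses/ $ANDROID_HOME/licenses/
```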
Finally, we just build our image:
docker build -t android-build:1.0 .
And, ta-da! We have our Android image, ready to build the app and run the non-instrumented tests. Let's verify that it works by opening a terminal in your Android project's root directory and running the following command:
docker run --rm -v $(pwd):/project android-build:1.0 bash -c './gradlew testDebugUnitTest'
A new Docker container spins up, kicking off the execution of the unit tests:
Dockerizing the Android Emulator
Most modern Android projects include instrumented tests, such as Compose UI tests. These tests are part of the pipelines too, so they require a dedicated Docker image for their execution. To build it, we're going to evolve our base image and embed an emulator inside. With the emulator baked into the image, we can spin up numerous instances, allowing us to parallelize the test execution according to our runner's capacity.
Including an emulator makes the Docker image far bigger, which is why we create a separate image based on the previous one (android-build:1.0). As before, you can find the whole Dockerfile in this Gist.
The first thing is installing some additional dependencies for the X Window System.
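These are the client libraries the emulator's display and audio stack links against. The exact package list below is an assumption; the authoritative set is in the Gist:

```dockerfile
# X11 / graphics / audio libraries the emulator binary depends on
# (package list is an assumption; see the Gist for the real one)
RUN apt-get update && apt-get install -y --no-install-recommends \
        libx11-6 libxext6 libxrender1 libxtst6 libxi6 libgl1 libpulse0 \
    && rm -rf /var/lib/apt/lists/*
```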
Next up, we have to set up the configuration for the emulator where our tests will run. This involves adding the necessary base image for the emulator along with the tooling.
The last line creates the emulator named testDevice using the system image chosen in the previous step.
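That step boils down to two commands: pulling the emulator plus a system image with sdkmanager, then creating the AVD with avdmanager. A sketch, reusing the Android 31 image from earlier:

```dockerfile
# Fetch the emulator binary and a system image for it
RUN sdkmanager "emulator" "system-images;android-31;google_apis;x86_64"

# Create the AVD; `echo no` declines the custom hardware profile prompt
RUN echo no | avdmanager create avd \
        -n testDevice \
        -k "system-images;android-31;google_apis;x86_64"
```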
Note that you can choose whatever image fits your needs, for example system-images;android-34;google_apis_playstore;arm64-v8a or system-images;android-32;google_apis;x86_64. You'll get the whole list of available images by running:
sdkmanager --list
There's one final detail in our Dockerfile: the launchEmulator.sh script. This is just a handy shell script that launches an instance of the emulator and waits until it's completely booted.
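The real script is in the Gist; a minimal sketch of the same idea, with an emulator flag set that is an assumption on my part, could look like this:

```shell
#!/usr/bin/env bash
# launchEmulator.sh (sketch): start a headless emulator and block until
# Android reports that boot has completed
set -euo pipefail

# Headless, software-rendered instance suited to a container
emulator -avd testDevice -no-window -no-audio -gpu swiftshader_indirect &

# Wait for the device node, then poll the boot-completed property
adb wait-for-device
until [ "$(adb shell getprop sys.boot_completed | tr -d '\r')" = "1" ]; do
    sleep 5
done
echo "Emulator booted"
```

Polling `sys.boot_completed` (rather than just `wait-for-device`) matters because adb becomes reachable well before the system is actually ready to run instrumented tests.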
If you want to go deeper down the rabbit hole, check out this amazing open-source project about dockerizing Android.
So, now we have a Docker image with an Android emulator that can handle our instrumented tests. Is that all? Not yet, we need to talk about an essential element that brings our puzzle to life.
Nested Virtualization
In essence, nested virtualization is a mechanism that lets you run a Virtual Machine (VM) inside an already virtualized environment. This is relevant because we intend to run the Android Emulator (itself a VM) inside a Docker container hosted on a virtualized runner. So the hardware behind our runners must support nested virtualization (NV).
The majority of Cloud providers, such as Azure, offer instances that support NV, but here's a quick checklist in case you're using on-premise runners:
- Make sure that your host machine meets the minimum requirements for nested virtualization. A handy way to check whether the hardware supports it is to install cpu-checker and run:
node-linux:~$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
- Use a hypervisor that supports nested virtualization, such as KVM (open source) or Hyper-V (Microsoft). If you control the hardware, enable nested virtualization in your host machine's BIOS or UEFI settings.
- Configure your Docker daemon to allow nested virtualization, either by using privileged mode or by giving the container access specifically to the /dev/kvm device.
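In practice, the last point means launching the container with the device exposed. Exposing only /dev/kvm is the least-privileged option; the image tag and Gradle task below are placeholders:

```shell
# Preferred: expose only the KVM device to the container
docker run --rm --device /dev/kvm \
    android-emulator:1.0 \
    bash -c './launchEmulator.sh && ./gradlew connectedDebugAndroidTest'

# Alternative (grants far more than needed): full privileged mode
# docker run --rm --privileged android-emulator:1.0 ...
```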
Currently, our Android CI runs on a combination of GitLab runners, Azure VMs, and on-premise x86_64 hardware running Ubuntu.
This sole topic is complex enough to deserve an entire article, so if you want to go deeper into the matter, have a look at this documentation:
Integrating the Docker Image inside GitLab CI
This is the easiest step of them all. Registering a machine as a GitLab Runner is pretty straightforward and all you need to do is follow this documentation:
Bear in mind that there are different types of runners depending on whether you want to run processes directly on the runner (shell executor) or just spin up Docker images (Docker executor). For our setup, you have to choose the Docker executor.
The behavior of your runner can be tweaked by modifying the config.toml file located in /etc/gitlab-runner. This is an example of how it looks in our case:
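Our actual file was embedded here; a sketch of what a Docker-executor entry typically contains follows. The names, URL, token, and image tag are all placeholders:

```toml
# /etc/gitlab-runner/config.toml (sketch; all values are placeholders)
concurrent = 4

[[runners]]
  name = "android-docker-runner"
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "registry.example.com/android-build:1.0"
    # Needed for the emulator image: either privileged mode...
    privileged = true
    # ...or, less broadly, just the KVM device
    devices = ["/dev/kvm"]
```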
Also, donāt forget to push the images to your container registry so they can be pulled down by your runners. Once registered, the runner should appear on your GitLab repository page under the section Runners.
And there you have it. Now you've got a GitLab runner ready to tackle any Android task your pipeline throws its way. 💪
Conclusion
Containerizing all your Gradle tasks is a significant leap for your pipelines: they are no longer tied to specific hardware. This unlocks the possibility of running in the Cloud and reaping all the advantages it offers: scalability, reliability, flexibility, and more.
In this article, we've covered the nuts and bolts of creating a Docker image able to build our app and run our tests (including the instrumented ones). Furthermore, we've briefly addressed how to seamlessly integrate this image into the GitLab CI platform.
Don't miss the next and final article in this series, where we go over the most useful techniques to speed up the pipelines and improve the overall performance of our containerized CI tasks.