If you’re an Android developer on a large team and you run tests for your code, then you’ve definitely faced the situation where your build spends quite some time in the queue before you get the test reports.
Let’s face it: until you have a fast develop -> commit -> build -> test -> (*deploy) -> repeat cycle, you cannot succeed.
Of course you want CI to be as fast as possible. But that’s not the only important thing: you also want to scale your build machines easily (maybe even on-demand). And what about the famous “works on my machine”? Maybe you’d like the build to be the same regardless of where and by whom it was built?
Suppose I have a clean installation of an OS. What do I need to do to support this CI flow? It seems trivial at first, but in reality there are complex dependencies here. Let’s take a look:
- Android dev environment:
  - JDK 7 & 8
  - Android SDK
  - Release keychain, google-services.json, etc…
Apart from installing all of these, you also need to keep them in sync across all your CI instances. Frustrating? Definitely. You may be tempted to write bash scripts that ssh in and upgrade everything, but you know that sometimes you will simply forget to run the update. Docker can help you with all of that.
A quick intro for those who don’t know what Docker is and what the fuss is all about. Docker is basically chroot on steroids. You can distribute a Docker image of the environment you want to use (much like a VM) and run commands inside that isolated environment, but compared to actual virtualization solutions it has these nice benefits: startup time is almost instant and the image size is very small.
Part of the Docker infrastructure is Docker Distribution (the Docker registry), which takes care of sharing and versioning your images. You just updated build-tools to a new version? No problem: upload the image and everyone (including the CI agents) will use this new latest version.
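Concretely, uploading an updated image is just a build and a push (the registry host below is a placeholder for your own):

```shell
# Build the image from your Dockerfile and push it to your private registry
docker build -t registry.example.com/docker-ci-android:latest .
docker push registry.example.com/docker-ci-android:latest
```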
So updating the environment will look something like this:
- Update your Dockerfile
- Push to VCS
- CI builds the new image and pushes it to the Docker registry (this can also be done manually, but why not automate this step too?)
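For reference, a minimal Dockerfile for such an image might look like the following sketch. The base image, download URL, and SDK component versions here are illustrative assumptions, not a prescription — adjust them to your project:

```dockerfile
# Illustrative sketch -- pin versions to match your project
FROM ubuntu:16.04

# JDK 8 for Gradle and the Android toolchain (add JDK 7 the same way if you need it)
RUN apt-get update && apt-get install -y openjdk-8-jdk wget unzip \
 && rm -rf /var/lib/apt/lists/*

# Android SDK command-line tools
ENV ANDROID_HOME=/opt/android-sdk
RUN wget -q https://dl.google.com/android/repository/tools_r25.2.3-linux.zip -O /tmp/tools.zip \
 && unzip -q /tmp/tools.zip -d $ANDROID_HOME \
 && rm /tmp/tools.zip

# Accept licenses and install the components the build needs
RUN yes | $ANDROID_HOME/tools/bin/sdkmanager "platform-tools" "build-tools;25.0.2" "platforms;android-25"
```

You would also bake in (or mount) a Gradle distribution, the release keychain, google-services.json, and so on, depending on what your builds need.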
Assembling an Android application with Docker then usually looks something like this:

docker run --rm -v "$(pwd)/app":/opt/app -w /opt/app docker-ci-android:latest gradle assembleRelease

What happens here:
- docker run — runs a command inside a container
- --rm — removes the container once the command finishes
- -v "$(pwd)/app":/opt/app — mounts the source code of the app inside the container (bind mounts require an absolute path, hence $(pwd))
- -w /opt/app — sets the working directory so Gradle finds the project
- docker-ci-android:latest — the name and tag (version) of your Docker image for building
- gradle assembleRelease — the command that builds the app
There are many frameworks one can use for testing, but basically they fall into two types:
- Those that require only a JVM (JUnit, Spock, Robolectric, etc.)
- Those that require Android (Espresso and other instrumentation tests, etc.)
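The JVM-only type can run in the same container as the build — a unit-test run is just another Gradle invocation (image name and paths match the earlier example):

```shell
# Run the JVM-based unit tests inside the build container
docker run --rm -v "$(pwd)/app":/opt/app -w /opt/app docker-ci-android:latest gradle test
```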
Since we’re already using a JDK inside Docker, the JVM tests will run without any additional effort, but the Android tests pose a big challenge. You can always test on an emulator that you spin up just for the tests, but that is still x86 Android, which makes the testing questionable and can lead to false-positive results.
So ideally we’d like to test on real, ARM-powered devices. Do you want to connect each device to the hardware that runs the CI instance? Maybe you don’t have physical access to the CI instance at all — what then? The OpenSTF folks already did most of the work, and the only thing missing was a command-line client to request devices. After combining the two, you get the ability to run anything that requires an Android device over a network connection, so your CI instance can be in one location and the devices in a completely different one.
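Once a device has been leased from OpenSTF, it is exposed as a remote adb endpoint, so the CI job can use it like a locally attached device. The host and port below are placeholders — OpenSTF shows the real URL when you claim a device:

```shell
# Attach the remote device leased from OpenSTF, then run instrumentation tests on it
adb connect stf.example.com:7401
adb devices                      # the remote device should now be listed
gradle connectedAndroidTest      # runs Espresso/instrumentation tests on it
```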
So, everything works now: your builds are published, the test reports are passing, and when you need to upgrade the JDK or build-tools you just push to VCS. Everything is great.
But what if you need to add more horsepower to this?
As for the CI instances, you just need to install an OS and Docker, and you’re good to go.
If you’re feeling more adventurous, you can scale even more easily: use docker-machine to install Docker and add the new agent to a Docker swarm running a global-mode service, so that when you add a new instance to the swarm your service is automatically replicated onto it.
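A sketch of that swarm setup — the join token, addresses, and agent image name are placeholders:

```shell
# On the manager node: create the swarm
docker swarm init

# On each new build machine: join the swarm
docker swarm join --token <worker-token> <manager-ip>:2377

# Global mode runs exactly one agent task on every node that joins the swarm
docker service create --mode global --name ci-agent <your-ci-agent-image>
```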
As for devices, most of the time you just need to plug in another one. When you reach the hardware limit of a device provider, you’ll need to create a new provider, following the setup you chose for deploying OpenSTF.
Why not use available cloud CI solutions like Travis?
Travis is a great general-purpose build system that you can tinker with to some extent, but it has its limitations if you want to speed things up: you cannot use tmpfs storage for running your build, and you cannot set up a fast Maven proxy repository (you don’t want to re-download all the jars/aars you depend on, right?).
Why not use cloud test labs already available like Firebase Test Lab?
If you want real devices, you’re in for a lot of money (e.g. $5 per device per hour). For a small team this may be a perfect solution, but as soon as your team grows and you want a workflow that tests almost every commit, your own test farm will save you a lot of money.
Maybe you also have a staging/development API that is only accessible from your internal network; public cloud device providers will not be able to give you that kind of connectivity.
There are many problems you’ll face when building such a system (from software that is very difficult to deploy to hardware issues on specific devices), but the benefits are definitely worth it.