Witness the (Android) fitness

Paul MacD.
Published in ASOS Tech Blog
6 min read · Mar 15, 2019

Over the past two years we’ve completely changed how we do things in the ASOS Android team. Before our transformation, our dev process had little structure, our automation was limited to unit tests and it took ages to publish a release because we were reliant on manual regression testing. Sound familiar? Then read on!

Back in September 2016, the Android team was just coming to the end of rewriting Checkout — an enormous project. This crucial piece of work aimed to bring the code, UI and functionality up-to-date for the most sensitive area of the app. The team was also growing rapidly and we had the numbers to split into two dev teams. With Checkout complete, we then had sufficient breathing room, resources and a strong appetite to make some long-overdue changes.

image by barika123, courtesy of pixabay.com

So let’s take a step back and look at where we used to be in terms of process, automation and releases:

Dev process:

We had two Android teams working on the same code base and we were merging all changes to the same shared branch before testing. This presented a few issues:

  • Untested code/features were all merged into the shared branch, so nothing was tested in isolation.
  • Bugs were visible to all, which caused delays or could even block the testing of unrelated functionality.

Automation:

We had a small number of Appium UI tests, but they were slow and unstable, so they were never used. Unit test coverage, however, was good, so we had a high level of confidence in the stability of the underlying codebase.

Release:

We would cut a Release Candidate (RC) branch about a week before release and build an RC from it. We’d then embark on two-to-three days of manual testing using test scripts across multiple device/Android OS combinations. This required all of the Android QAs, a few devs and, if needed, a couple of iOS QAs. Every release was a big one, and this laborious process meant we could only release every month or two.

Things had to change, and so the discussions began. We have a lot of talent and experience in the team and we drew on this to create a vision of our ideal Dev & QA process. We also documented our proposed changes so that our leaders had visibility on what we were up to. Our vision was for continuous integration; creating a solid suite of automated UI tests was the top priority, but we’d also need to change our process to make best use of them.

Let’s look at the changes we made in the first few months:

Dev process:

  • All stories/features to be tested in story/feature branches, in isolation, before merging. This includes manual exploratory testing against the acceptance criteria plus verifying the existence of suitable automated tests.
  • The shared branch effectively became our ‘stable’ branch.

Automation:

  • Every new story/feature has to include automated UI tests (using mocked API responses) as standard; no exceptions.
  • We began using Espresso, the Android native testing framework, because it’s faster, cleaner and more flexible.
  • All UI tests were run against the story/feature branch before merging to the shared branch.
  • We hooked Jenkins up to GitHub so that every Pull Request (PR) merged to the shared branch automatically triggered a full run of all UI tests.
  • We set up our own Jenkins CI machine to run all of our lovely UI tests using a library called Fork.
  • We also made an immediate start on adding UI tests for all areas of the app lacking coverage.
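The mocked-API approach above is what lets UI tests run fast and deterministically. As a rough sketch of the idea (the real suite uses Espresso on a device or emulator; the class and endpoint names here are hypothetical), each test stubs the responses its screen will request, so nothing depends on the live API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of mocked API responses for UI tests.
// Class names, endpoints and payloads are hypothetical.
public class MockedApiSketch {

    // Canned responses keyed by endpoint, so tests never hit the live API.
    static class MockApi {
        private final Map<String, String> canned = new HashMap<>();

        void stub(String endpoint, String jsonBody) {
            canned.put(endpoint, jsonBody);
        }

        String get(String endpoint) {
            String body = canned.get(endpoint);
            if (body == null) {
                // Failing loudly on unstubbed calls keeps tests deterministic.
                throw new IllegalStateException("Unstubbed endpoint: " + endpoint);
            }
            return body;
        }
    }

    public static void main(String[] args) {
        MockApi api = new MockApi();
        // Stub the bag endpoint with a fixed payload before launching the screen.
        api.stub("/bag", "{\"items\":1,\"total\":\"20.00\"}");

        // An Espresso test would now launch the bag screen and assert on the UI;
        // here we just assert on what the screen would receive.
        String response = api.get("/bag");
        if (!response.contains("20.00")) {
            throw new AssertionError("Expected stubbed total in response");
        }
        System.out.println("bag response: " + response);
    }
}
```

Because every response is stubbed, the same test produces the same result on every run, which is what makes it safe to gate merges on.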

Release:

  • We started ‘bug bash’ sessions. A few days before a release, all of the Devs & QAs install the RC build on a range of devices and try to find bugs through focussed exploratory testing of new features.
  • My fellow QAs and I also created an end-to-end test suite (using the live API) to cover the core user journeys in our app. These tests are performed prior to release as a final smoke/sanity test; the ‘icing on the cake’ in terms of confidence. These tests have saved us on more than one occasion!
  • We still did a bit of manual ‘scripted’ testing, but we managed to cut it down, bit by bit, as our changes took effect.
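The end-to-end suite described above can be pictured as a short, ordered list of core journeys run against the live API as a final gate. This is only a hypothetical sketch of the shape (journey names are illustrative, and the real checks drive the app itself), showing the fail-fast behaviour that makes it a useful pre-release smoke test:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of a pre-release smoke suite: a handful of core
// journeys run in order, failing fast on the first broken one.
public class SmokeSuite {

    private final Map<String, Supplier<Boolean>> journeys = new LinkedHashMap<>();

    void journey(String name, Supplier<Boolean> check) {
        journeys.put(name, check);
    }

    boolean runAll() {
        for (Map.Entry<String, Supplier<Boolean>> e : journeys.entrySet()) {
            boolean passed = e.getValue().get();
            System.out.println((passed ? "PASS " : "FAIL ") + e.getKey());
            if (!passed) {
                return false; // fail fast: a broken core journey blocks release
            }
        }
        return true;
    }

    public static void main(String[] args) {
        SmokeSuite suite = new SmokeSuite();
        // Real journeys would drive the app end-to-end; these stubs stand in.
        suite.journey("browse and search", () -> true);
        suite.journey("add to bag", () -> true);
        suite.journey("checkout", () -> true);
        System.out.println(suite.runAll() ? "release candidate OK" : "blocked");
    }
}
```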

So far, so good. We made huge changes in all areas and, best of all, the burden of manual testing had been greatly reduced without sacrificing quality. We were still growing in numbers, so we created a third Android dev team. This was the first big set of changes, but we wanted to go further. There were also a few creases we needed to iron out:

  • Some of our feature branches became a bit too big and long-lived. This meant constantly having to update the branches and resolve conflicts.
  • We used multiple Android emulators to run our UI tests, but these were unstable because the machine kept running out of memory.
  • Running all UI tests for every branch was taking too long. Test run time increased as we developed more features and created more and more tests.
  • We didn’t have great visibility of test failures on the shared branch and it was unclear who should take responsibility to investigate and fix them.

So here’s a summary of further changes and improvements we’ve made:

Dev process:

  • We try to keep story branches as small as possible and merge them ASAP. This is made easier by trying to make the stories themselves as small as possible.
  • We use local feature flags in our test app to enable/disable code for features that are not yet complete (we also use remote feature flags to enable/disable completed features in the live app).
  • The Green Rota: each team takes it in turn, week by week, to take responsibility for investigating and fixing test failures on the shared branch.
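The two-tier flag setup above is what lets small branches merge before a feature is finished. As a minimal sketch of that idea (flag and feature names are made up, and the real remote flags come from a config service rather than a map), incomplete code is gated locally in test builds while remote flags control completed features in the live app:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of two-tier feature flags (names hypothetical): local flags gate
// unfinished code in test builds; remote flags switch finished features
// on/off in the live app.
public class FeatureFlags {

    // Local flags: compiled into the test app, flipped by devs/QAs.
    private static final Map<String, Boolean> LOCAL = new HashMap<>();
    // Remote flags: fetched from a config service at runtime (stubbed here).
    private static final Map<String, Boolean> REMOTE = new HashMap<>();

    static {
        LOCAL.put("newWishlist", false);  // still in development, off by default
        REMOTE.put("savedItems", true);   // complete, enabled for live users
    }

    static boolean isEnabled(String feature) {
        // A local flag takes precedence; otherwise fall back to remote config.
        if (LOCAL.containsKey(feature)) {
            return LOCAL.get(feature);
        }
        // Unknown features default to off, so half-merged code stays dark.
        return REMOTE.getOrDefault(feature, false);
    }

    public static void main(String[] args) {
        System.out.println("newWishlist enabled: " + isEnabled("newWishlist"));
        System.out.println("savedItems enabled: " + isEnabled("savedItems"));
    }
}
```

Defaulting unknown flags to off is the design choice that makes it safe to merge a half-built feature: its code ships dark until someone deliberately turns it on.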

Automation:

  • TeamCity (our Continuous Integration and Deployment server) now handles all of our build configurations, from running unit & UI tests to generating the Play Store build.
  • TeamCity automatically runs our 8.5k+ unit tests for every PR. Merging is blocked without a successful build.
  • We run a ‘QABuild’ for every PR. We’ve added tags to our tests so that we can select only the tests relevant to the changes made. This saves us a massive amount of time: around 10 minutes vs 1.5 hours to run all tests!
  • When a PR is merged to the shared branch, we trigger a larger set of ‘essential’ tests to ensure app stability in key areas.
  • We also have a nightly build, which runs all 1200+ UI tests.
  • More devices for CI: we now have two ‘pools’: one for QABuilds and one for test runs on the shared branch. We just switched from Fork to Spoon for a bit more speed and flexibility.
  • We’ve got TeamCity hooked up to GitHub and Slack to notify us of build status for individual PRs and shared branch builds respectively.
  • Over the past 12 months we’ve added, on average, 205 unit tests and 44 UI tests per month.
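The tag-based selection behind the ‘QABuild’ can be sketched as follows. This is an illustrative model only (test names, tags and app areas are hypothetical, and the real selection is wired into the build): each UI test declares the areas it covers, and a PR build runs only the tests whose tags overlap the areas touched by the change:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of tag-based test selection for a PR build:
// run only the tests whose tags overlap the changed areas of the app.
public class TestSelector {

    // Each UI test declares the app areas it covers.
    static final Map<String, Set<String>> TEST_TAGS = new LinkedHashMap<>();

    static {
        TEST_TAGS.put("checkoutHappyPath", Set.of("checkout", "bag"));
        TEST_TAGS.put("searchFilters", Set.of("search"));
        TEST_TAGS.put("bagEditQuantity", Set.of("bag"));
    }

    static List<String> select(Set<String> changedAreas) {
        List<String> selected = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : TEST_TAGS.entrySet()) {
            // Include the test if any of its tags match a changed area.
            if (!Collections.disjoint(e.getValue(), changedAreas)) {
                selected.add(e.getKey());
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        // A PR that only touches the bag runs 2 of the 3 tests here.
        System.out.println(select(Set.of("bag")));
    }
}
```

The time saving comes directly from this intersection: a PR touching one area skips every test tagged only with other areas, while the nightly build still runs everything as a safety net.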

Release:

  • We now release to our Beta users every week!
  • Release captains — we all take it in turn to coordinate the weekly release.
  • We still do our Bug Bash sessions for every release and we’ve cut these down to 30 mins.
  • We run all of our UI and end-to-end tests prior to release.
  • No more manual ‘scripted’ testing!

image by andrekheren, courtesy of pixabay.com

I believe that the biggest factor in all of this is confidence. All of the process changes — writing unit and UI tests, making sure we run the right tests and perform the right kind of testing at each stage — have given us the confidence to release every week to our Beta users. In addition, our Beta users give us yet more confidence that the app is stable ‘in the wild’ before we roll it out to the rest of our users.

We’ve done great things, but there’s always room for improvement! We continue in our efforts to make our process smoother, our automation faster and our releases bug free. Any feedback or suggestions for making our processes even better are most welcome!

Paul MacDonald is a Senior QA in the ASOS Android Team and he’s been testing apps for over 10 years. When he’s not staring at phones, Paul enjoys a bit of cooking, too much TV, not enough DIY and spending as much time as possible with his kids.
