Photo by Brett Jordan from Pexels

The evolution of apps Quality Assurance at Azimo

Our journey, goals, and motivations

Mirek Stanek · Published in AzimoLabs · Apr 29, 2021 · 5 min read


This is the first in a series of blog posts in which we outline our years of experience with Android app testing at Azimo. Most of the principles, goals, and achievements also apply to our iOS app.


When you work on a long-lasting project, things like process automation, CI/CD, or the testing tech stack are very rarely delivered as a step change. For most companies (especially startups), it's impossible to freeze product development for 3–6 months and cover the entire app with tests. The only way is evolution, not revolution: move slowly, week by week, step by step, towards long-term goals that serve the company's needs.

When we started developing the Android app (2014/2015), our MVP had zero tests. That's not unusual for a company that has minimal time to launch a product and gain traction. Back then, at mobile conferences, we could see companies like Spotify, which tested its app 24/7 with monkey runners trying to crash it, or Google, presenting UI tests that ran against fake servers. With not a single unit test in our project, we could only dream about these solutions.

Today, even though we don't have 24/7 monkey runners and our goal is to limit UI tests as much as possible, we are proud of our testing solutions and the processes around them. Our testing stack includes thousands of unit tests and hundreds of functional/end-to-end tests, and it runs its jobs on cloud-hosted emulators.

This series isn't only about the final effect of our work. We want to walk you through our journey from a project with no unit tests to where we are now. You will read not only about what we built but also about the reasoning behind these solutions.

Let’s begin.

Step one — unit testing

In 2016, we aimed for 70%+ unit test coverage on both of our mobile platforms

During the first years, we focused on the base of the testing pyramid: unit testing. Quarter by quarter, we set milestones for test coverage (around 10–20% more each period). At the peak, we achieved something between 70% and 80%. Even though there is criticism around using code coverage as a goal, in the early months we saw a clear correlation between this metric and app quality.
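As an illustration of how such a coverage milestone can be kept honest in the build, here is a minimal sketch using the JaCoCo Gradle plugin on a plain Kotlin/JVM module. JaCoCo, the module layout, and the 70% threshold are assumptions made for this example, not necessarily the exact setup we used:

```kotlin
// build.gradle.kts: hypothetical example, not our actual build script.
// Assumes a plain Kotlin/JVM module with the JaCoCo plugin applied.
plugins {
    kotlin("jvm") version "1.9.24"
    jacoco
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation(kotlin("test"))
}

tasks.jacocoTestCoverageVerification {
    // Coverage data comes from the unit test run.
    dependsOn(tasks.test)
    violationRules {
        rule {
            limit {
                // Fail the build when overall coverage drops below the current milestone.
                minimum = "0.70".toBigDecimal()
            }
        }
    }
}

// Run the verification as part of `./gradlew check`.
tasks.check {
    dependsOn(tasks.jacocoTestCoverageVerification)
}
```

With a gate like this wired into CI, raising the quarterly milestone becomes a one-line change.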

See Martin Fowler's great insight about test coverage metrics.

Critics often say that it's too easy to inflate coverage reports by writing poor-quality tests. That's true, especially if you don't have a good motivation for writing them. "Unit tests are good, all engineers should write them" or "our standard is to have >70% code coverage" aren't good reasons.
So here were ours.

Our motivation behind unit testing and test coverage

We needed to speed up the development cycle. Sometimes it took us multiple weeks to deliver even the most straightforward changes to production. One of the bottlenecks was cooperation with the centralized QA team. We needed to book a day in their calendar, and when the day came, the app was manually tested from the ground up. We got the list of bugs from them, went back to development, and once engineers had fixed everything, the process repeated. Sometimes testing and fixing alone took weeks before the app was released.

Just imagine what happened when QA found a single bug that restarted the entire cycle 😱. We're not sure who felt worse: testers or developers. It certainly wasn't good cooperation. It was "them" vs. "us".

The reason to write unit tests and increase code coverage was to minimize the back and forth between developers and the QA team, in particular to:

  • Make sure that a bug, once found, never comes back,
  • Test logic that is hard to reproduce in a manual testing scenario (e.g., asking for an app review after a couple of days),
  • Test tedious things that shouldn't bother QA engineers every time they test the app (is the min-5-characters validator still working? See the sketch after this list),
  • Increase loose coupling in the code, so changes introduced in module "A" don't break anything in module "B" ("why does work on the login screen blow up the payment process?? 🤯").
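To make the "tedious things" point concrete, here is a minimal sketch of the kind of unit test we mean. The MinLengthValidator below is a hypothetical class written for illustration, not code from our app:

```kotlin
import org.junit.Assert.assertFalse
import org.junit.Assert.assertTrue
import org.junit.Test

// Hypothetical production class, shown here only to make the test readable.
class MinLengthValidator(private val minLength: Int) {
    fun isValid(input: String): Boolean = input.trim().length >= minLength
}

// Plain JVM unit test (JUnit 4): it runs in milliseconds on every build,
// so nobody has to re-check this rule by hand before a release.
class MinLengthValidatorTest {

    private val validator = MinLengthValidator(minLength = 5)

    @Test
    fun `rejects input shorter than five characters`() {
        assertFalse(validator.isValid("abcd"))
    }

    @Test
    fun `ignores surrounding whitespace when counting characters`() {
        assertFalse(validator.isValid("  abc  "))
    }

    @Test
    fun `accepts input with at least five characters`() {
        assertTrue(validator.isValid("abcde"))
    }
}
```

Once checks like this run on every commit, QA engineers can spend their time on the flows that actually need human judgment.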

We didn't set our code coverage goals without context or because other companies had them. We acknowledged our limitations (no dedicated QA engineers in the team), our bottlenecks (access to QA testers), and the company's needs (improve lead time from weeks to something better so we could iterate on our product more often). Adding unit tests was the only solution for us at that time. Code coverage metrics helped us track the progress of our journey.

After long months of work, with coverage oscillating around 50–60%, we started noticing real benefits:

  • Noticeably fewer crashes and bugs in the app (our crash-free ratio went from 95% to 99% between 2015 and 2016),
  • Better code modularisation, which led to faster, less error-prone app development and…
  • …the possibility to open full-time QA engineer roles in apps teams.

We will cover these in the next blog post. Stay tuned!

Towards financial services available to all

We’re working throughout the company to create faster, cheaper, and more available financial services all over the world, and here are some of the techniques that we’re utilizing. There’s still a long way ahead of us, and if you’d like to be part of that journey, check out our careers page.

