Testing Strategy in Android — Part 1

Natan Ximenes · Published in Inside League · Sep 26, 2021
Statue of Sun Tzu, author of The Art of War

"Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat."
(Sun Tzu, c. 500 BC)

The meaning of Strategy

Sun Tzu was a Chinese military general and philosopher who became known for his book "The Art of War". The book is about efficiency on the battlefield, and it describes the most suitable techniques for each situation, guided by a strategy.

Sun Tzu's book is about war, but strategy is not exclusive to war, as we can see in its definition in the Collins Dictionary:
"the art of planning the best way to gain an advantage or achieve success"

So, it's all about planning the best use of the resources we have to achieve a goal. And that's exactly what we need to build a good testing strategy.

Testing Strategy

Quality is a must in any application. We shouldn’t deliver untested software to production, because the users would end up being the testers, and they wouldn’t be happy finding a lot of bugs in an application. That’s why testing is needed, and a good strategy can help us test an application properly.

So, it’s necessary to have a clear plan about what we want to test and how we want to test it, free of personal bias, focused on the software’s specificities and its possible points of failure.

Here is what needs to be defined in a Testing Strategy:

What to test

It refers to the application components that we want to stimulate, to ensure their expected behavior. The components vary according to each application’s type, architecture and design. A component can be a class, a function, an application layer, a screen, and so on. We may choose to test it in isolation or together with other components, depending on how we choose to test it.

How to test

It refers to both the types of tests and the tools that will be chosen.

The testing types usually boil down to these three: Unit Tests, Integration Tests and UI/Functional Tests. However, that doesn’t mean they’re the only types of tests available, nor that you will always need all of them. It depends on the type of your application, how you have structured it, and its architecture.

The tools will help us achieve the quality goals. Usually they will be an automation framework or an assertion library that makes it easy to put your application in the state you need for the necessary validations.
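
As a minimal sketch of that, here is a plain JUnit unit test in Kotlin for a made-up PriceFormatter class (both the class and its behavior are hypothetical, for illustration only):

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical class under test, for illustration only.
class PriceFormatter {
    fun format(cents: Long): String {
        val dollars = cents / 100
        val remainder = (cents % 100).toString().padStart(2, '0')
        return "\$$dollars.$remainder"
    }
}

class PriceFormatterTest {

    private val formatter = PriceFormatter()

    @Test
    fun `formats cents as dollars`() {
        // The assertion (JUnit's Assert here) validates the component's output.
        assertEquals("$12.34", formatter.format(1234))
    }
}
```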

Bias-independent decisions

It’s about defining a strategy based on the impersonality of the implementation, focused on validating the paths that an application’s algorithm can follow.

A testing strategy decision shouldn’t be based on how easy it would be for a specific person or a subset of the team to implement an algorithm in a safe way, where only some key points would have to be validated. Even if a team is made up only of specialist engineers, they’re still regular human beings who might fail sometimes.

When it comes to tests, it’s all about the code and not about the person who implemented it.

Quality can’t be taken for granted, it needs to be ensured.

So both what to test and how to test must be team decisions based on facts, not on personal taste.

Let’s look at an example:

  • Some may like to use the AAA (Arrange, Act, Assert) pattern for test structure and naming, while others may prefer the GWT (Given, When, Then) pattern. Until one of these two options is chosen, the code might have different styles in the same code base, and unnecessary discussions would start in code reviews about which one is the right pattern. This might be a nightmare for newcomers to the team handling legacy code (see the sketch below).
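
As a sketch of the difference, here is the same hypothetical test written in each style, using a made-up Cart class; the content is identical, only the structure and naming change:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical class under test, for illustration only.
class Cart {
    private val items = mutableListOf<Int>()
    fun add(priceCents: Int) { items.add(priceCents) }
    fun total(): Int = items.sum()
}

class CartTest {

    @Test
    fun `AAA style - total sums all item prices`() {
        // Arrange
        val cart = Cart()
        cart.add(500)
        cart.add(250)
        // Act
        val total = cart.total()
        // Assert
        assertEquals(750, total)
    }

    @Test
    fun `GWT style - given a cart with items when totaling then prices are summed`() {
        // Given a cart with two items
        val cart = Cart().apply { add(500); add(250) }
        // When the total is calculated
        val total = cart.total()
        // Then it's the sum of the item prices
        assertEquals(750, total)
    }
}
```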

A team must have a clear Testing Strategy that can be used as documentation to guide how the application’s tests should be written and maintained.

In other words, the definition of a Testing Strategy can also be a team agreement, helping the team collaborate to create tests in a cleaner way, reducing the cognitive load of those who will implement tests, and removing uncertainties such as: Which pattern fits better? Which type of test should be implemented? Which framework/library should be used?

Software Specificities

An application’s specificities, which might be related to concurrency and parallelism, frameworks or architecture, must be considered in a good testing strategy.

In Android, the Activity and Fragment lifecycles are a good example of such a specificity. Depending on how they are implemented, it might be necessary to put your Activity or Fragment in a specific lifecycle state so an assertion can be made in an instrumented test.
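
For example, AndroidX Test’s ActivityScenario lets an instrumented test drive an Activity to the exact lifecycle state an assertion needs; in this sketch, MainActivity is a hypothetical Activity of your app:

```kotlin
import androidx.lifecycle.Lifecycle
import androidx.test.core.app.ActivityScenario
import org.junit.Assert.assertEquals
import org.junit.Test

class MainActivityLifecycleTest {

    @Test
    fun activityBehavesCorrectlyWhenResumed() {
        // MainActivity is a hypothetical Activity of your app.
        ActivityScenario.launch(MainActivity::class.java).use { scenario ->
            // Put the Activity in the lifecycle state the assertion needs.
            scenario.moveToState(Lifecycle.State.RESUMED)
            scenario.onActivity { activity ->
                assertEquals(Lifecycle.State.RESUMED, activity.lifecycle.currentState)
            }
        }
    }
}
```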

Another good example, also in Android, is related to concurrency. Depending on which concurrency framework has been chosen to handle background and asynchronous tasks, the way the tests are written might be impacted. For Instrumented Tests, Unit Tests and maybe Integration Tests, a rule would be necessary to set up how threads will work during the execution. Also, for Unit Tests, it would be necessary to make assertions using what the framework provides to check the output of asynchronous code.
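
As a sketch of such a rule, assuming Coroutines and the kotlinx-coroutines-test library, a JUnit rule can swap Dispatchers.Main for a controllable test dispatcher during each test:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.test.StandardTestDispatcher
import kotlinx.coroutines.test.TestDispatcher
import kotlinx.coroutines.test.resetMain
import kotlinx.coroutines.test.setMain
import org.junit.rules.TestWatcher
import org.junit.runner.Description

// Replaces Dispatchers.Main with a test dispatcher, so code that posts to
// the main thread can run deterministically in a local unit test.
@OptIn(ExperimentalCoroutinesApi::class)
class MainDispatcherRule(
    private val testDispatcher: TestDispatcher = StandardTestDispatcher()
) : TestWatcher() {
    override fun starting(description: Description) = Dispatchers.setMain(testDispatcher)
    override fun finished(description: Description) = Dispatchers.resetMain()
}
```

Each test class then applies it as a @get:Rule property, making the threading setup explicit and uniform across the suite.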

So, if your application handles concurrency with RxJava, Coroutines, Flow, AsyncTasks (I hope it doesn’t 👀), or handles threads manually, each of them has specific ways of being implemented in your production code, and also specific ways of being handled in your test code.

So, that’s why application specificities must be considered in your testing strategy, so the right setup can be made to enable your code to be validated. This setup might involve mocks, fakes or some other abstractions to decouple parts of the code that are hard to validate or control.
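
For instance, here is a minimal sketch of a fake, assuming a hypothetical UserRepository abstraction over a data source that is hard to control in tests:

```kotlin
// Hypothetical abstraction over a data source that is hard to control in tests.
interface UserRepository {
    fun userName(id: String): String?
}

// The production implementation might hit the network or a database.
class RemoteUserRepository : UserRepository {
    override fun userName(id: String): String? = TODO("network call")
}

// The fake used in tests: deterministic, fast and fully controllable.
class FakeUserRepository(
    private val users: Map<String, String> = emptyMap()
) : UserRepository {
    override fun userName(id: String): String? = users[id]
}
```

Code that depends on UserRepository can then be tested against the fake, with no network or database involved.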

Also, if a framework, library or architecture looks good for production code, but it makes tests complex or impossible to implement, maybe you shouldn’t adopt it.

Points of Failure

Depending on what feature is being implemented, sometimes even automated tests can’t completely ensure quality.

There are parts of an application that are more sensitive than others, depending on the type of digital product the app is. In other words, if a sensitive part has many failures, it could represent a financial loss or some other negative impact for the company.

Here are some examples, where automation might not be enough:

  • If an audio/video streaming app is being built, what’s the point of having 100% test coverage if the stream doesn’t play in a stable way, because of implementation details (in the app or in the backend) that couldn’t be covered by the test automation?
  • If an app that does financial transactions is being built, is it OK for a user to be impacted by bugs or operation failures at the moment they’re making a transaction?

In both cases, even with the best effort to automate tests that help us ensure quality, sometimes we can’t test or reproduce things that happen only in the production environment. Even though all tests have passed in CI, some internal or external services might still fail in production.

Specifically for the two example applications above, the following strategies could be adopted to mitigate the risks:

Streaming App

Strategy for edge cases: Manual testing, to check the streaming stability on different networks (poor mobile networks and Wi-Fi). It’s not an easy scenario to automate.

Possible action point: If the connection isn’t stable enough, the team could implement an adaptive frame rate and bitrate solution that selects the audio/video quality according to the available connection speed, so the media can play without too much freezing caused by buffering.

Financial App

Strategy for edge cases: Smoke Tests (Unit, Integration and/or Functional), to validate the transaction flows. They represent a subset of the test suite covering the most important parts that should never fail. So, if a regression or any other bug is caught by the smoke tests in CI, the next steps aren’t executed until the smoke tests pass again. This brings more awareness and quicker feedback to the team. It would apply to both the backend services and the app.
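
One way to mark such a subset, sketched here with JUnit 4 categories (the SmokeTest marker and TransferTest are hypothetical):

```kotlin
import org.junit.Test
import org.junit.experimental.categories.Category

// Hypothetical marker interface identifying the smoke-test subset.
interface SmokeTest

class TransferTest {

    @Category(SmokeTest::class)
    @Test
    fun `transfer flow must never break`() {
        // Critical-path validation would go here.
    }

    @Test
    fun `non-critical edge case`() {
        // Regular test, executed only in the full suite.
    }
}
```

CI can then run only the SmokeTest category first (for example, with Gradle’s useJUnit { includeCategories("your.pkg.SmokeTest") } filter) and block the following steps if it fails.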

Possible action points: Run the smoke tests frequently, in CI when pull requests are opened and also in the release workflow, to mitigate possible failure points. The app should also be prepared to give proper feedback to the users, even if something unexpected happens.

TL;DR

The act of implementing automated tests goes beyond creating test classes and choosing testing tools.

The team must decide together, as a team, what to test, how to test and which pattern to follow. It’s important to mind the architecture, the design and any other application specificity when choosing the best way to validate an application. It’s also necessary to treat our tests as our helpers in the pursuit of quality, to prevent our users from being impacted by failures.

In Part 2, we’re going to see how to properly deal with the Testing Pyramid and test coverage.


Thanks to Samanta Cicilia for inspiring me to write this, and thanks to Kellycroesy for improving my English!

Share this post with anyone who is learning about testing. Also, leave a comment here if you’ve ever had to think about a testing strategy to validate your application!
