Mobile Test Automation — The Bigger Picture

Team Merlin
Government Digital Services, Singapore
5 min read · Apr 3, 2020

In our previous post, we discussed what Mobile Automation Test (MAT) is. To recap, we want our MAT set-up to be:

  • Business-friendly
  • Scalable
  • Integrated into the C.I. pipeline
  • Relevant

In this article, we’ll dive straight into the specifics of our MAT set-up and elaborate more on each of the points above.

When tests are written in programming languages (e.g. Java, JavaScript, or Python), it may not be easy for product stakeholders to translate the code into its business context. Hence, Behaviour-Driven Development (BDD) has been adopted in Agile projects as a way to document user stories in the form of use cases or acceptance criteria.

The BDD syntax is simple, clear, and concise, following the principle of specification by example (in the form of GIVEN, WHEN, THEN statements), which is why we call it business-friendly.

A BDD example
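As an illustration (the feature and steps below are purely hypothetical, not from our actual suite), a login scenario in Gherkin might read:

```gherkin
Feature: Login
  As a registered user, I want to log in
  so that I can access my account.

  Scenario: Successful login with valid credentials
    Given the app is launched on the login screen
    When the user enters a valid username and password
    And the user taps the "Log In" button
    Then the home screen is displayed
```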

See? BDD is very simple, right? The English-like syntax makes it easily understandable by anyone, even product stakeholders!

It is important to update these tests in the course of making changes. While updating the tests, the team is also updating the living documentation in tandem. This removes the need to maintain a separate set of documents for your test scenarios, test cases, and so on. Your tests are now your documentation. Isn’t that awesome? 😊

As the project development progresses, more test cases will be written. Thus, the MAT set-up has to be able to scale (in terms of size and platforms) accordingly. We want to write more tests when required and run them with speed and across different mobile devices.

Test design is a major factor of scalability!

Imagine writing over 1000 tests for an application with 50 features. In such situations, the test framework set-up has to provide:

  • Guidelines on how to structure and organise our tests
  • Capabilities to run tests in parallel
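As a minimal sketch of the second point (the feature names are made up, and a thread pool stands in for real device sessions):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for launching a real device/Appium session per feature.
def run_suite(feature):
    return f"{feature}: passed"

features = ["login", "checkout", "profile", "notifications"]

# Run independent feature suites in parallel instead of one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_suite, features))

for line in results:
    print(line)
```

With real suites, each worker would drive its own device or emulator, so total run time approaches that of the slowest suite rather than the sum of all of them.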

Apart from the MAT set-up, the QEs have to huddle with the developers to set a clear direction on the design and encourage cooperation and ownership to keep our tests running effectively.

Well, QEs shouldn’t be the only ones taking care of product quality; it should be the whole team’s responsibility to ensure that there’s quality in the product.

When the team has written a large number of test cases, it doesn’t make sense to keep running them manually. This is why integration with Continuous Integration (C.I.) is key (especially for long-term projects) and a crucial goal in any test automation strategy!

Once the written tests have no further changes, we’ll move them to the C.I. server so that they are executed automatically on the appropriate triggers (e.g. when developers check in their code). In this way, the team will know whenever someone’s changes break the automation tests… 😏
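As an illustration only (our actual pipeline and script names may differ), a GitHub Actions-style trigger that runs the suite on every check-in could look like:

```yaml
# Hypothetical C.I. trigger: run the MAT suite whenever code is pushed.
name: mobile-automation-tests
on:
  push:
    branches: [main]
jobs:
  mat:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./run-mat-suite.sh   # placeholder for the team's actual test runner
```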

In the IT industry, adapting to change is crucial. As the technology around us gets better, it only makes sense that we improve our testing toolkit from time to time. This means updating the test libraries, frameworks, connected devices, operating systems, and whatever gets the job done in a better way.

The same applies to our MATs. Our MAT set-up utilises a whole range of libraries, servers, and components that are updated from time to time. Our modular approach lets us plug and play, swapping in new versions of those modules as they are released. This allows us to stay relevant and up to date.

Once the MATs are set up, we have to measure the success and the ROI of the tests.

Coverage is a double-edged sword. Whenever someone mentions the word “coverage”, one assumes it means covering as much as we can. This understanding isn’t wrong. However, in UI testing, you’d want to strike a balance: cover the most critical end-to-end flows first, and possibly move the remaining tests to the service level and unit level.

Why? You may ask…

The Testing Pyramid

As you can see, UI testing is expensive (time-wise). Therefore, you would want to keep your UI test suite as compact as possible.

New features mean adding more tests to our codebase. But it is pointless to do so when no one’s maintaining them, especially the old test cases! As we add new tests to our suite, we can’t assume that our older tests are still relevant. We need to look at our test suite as a whole and ensure that it remains effective.

Although our MAT framework should guide us to write tests in BDD format, the way you structure your test scenarios/cases determines their maintainability in the future. It is important that we create reusable functions inside our step definitions in Cucumber/Robot Framework and write streamlined test scenarios at the feature level, so that they don’t overlap one another and confuse our readers.
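To sketch the idea of a reusable step definition in plain Python (a toy registry standing in for behave/Cucumber, with made-up step text, not our real code):

```python
# Toy cucumber-style step registry: one reusable, parameterised step
# replaces near-duplicates like "log in as admin" / "log in as user".
STEPS = {}

def step(pattern):
    def register(fn):
        STEPS[pattern] = fn
        return fn
    return register

@step("the user logs in as {role}")
def log_in(role):
    return f"logged in as {role}"

def run_step(text):
    # Naive matcher: match on the literal prefix, pass the rest as the parameter.
    for pattern, fn in STEPS.items():
        prefix = pattern.split("{")[0]
        if text.startswith(prefix):
            return fn(text[len(prefix):])
    raise KeyError(text)

print(run_step("the user logs in as admin"))   # logged in as admin
print(run_step("the user logs in as tester"))  # logged in as tester
```

In a real project, behave or cucumber provides the registry and matching; the point is that many scenarios share one step definition instead of each growing its own.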

Lastly…

Nothing is useful if the tests are unreliable. We simply cannot allow the C.I. to turn red or fail just because the tests are flaky or inconsistent. Thus, more time is spent analysing the tests to make them more effective and trustworthy, so that the entire team can depend on them for an informed status of the application.
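One common stopgap while a flaky test is being analysed is an automatic retry; a minimal sketch (the test body here is simulated, and retries mask flakiness rather than fix it) might look like:

```python
import functools

def retry(times=2):
    """Re-run a test on AssertionError up to `times` extra attempts."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            last = None
            for _ in range(times + 1):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as exc:
                    last = exc
            raise last
        return inner
    return wrap

calls = {"n": 0}

@retry(times=2)
def flaky_check():
    calls["n"] += 1
    if calls["n"] < 2:  # simulated transient failure on the first attempt only
        raise AssertionError("transient UI timing issue")
    return "passed"

result = flaky_check()  # fails once, then passes on the retry
```

The real fix is still to find and remove the source of flakiness; retries only keep the C.I. green in the meantime.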

Applications are built incrementally and iteratively, and we need to consistently propagate those changes into our tests to keep them relevant. Hence, it is important to have a reliable, scalable, and easy-to-maintain test framework. And by automating these tests (i.e. integrating them into the C.I. pipeline), we save the team lots of time and mental effort (no more remembering to run the tests)!

Are you into MAT as well? Share your story with us or drop us your comment(s) below!

Stay healthy and safe~ Remember to wash your hands!

Merlin ❤️
