iOS Test Automation @WalmartLabs

Chaoyi Chen
Walmart Global Tech Blog
6 min read · Feb 10, 2017

--

In the article Elements of Successful Massive Scale Automated Testing, UI automation testing in general is summed up as something that just plain sucks, which is doubly true when it comes to mobile.

However, the article also lays out the test automation process strategies here at WalmartLabs: Massive Parallelism, Early Stage Testing and Deterministic Behavior. In this article, we’ll take a closer look at how Walmart iOS test automation works toward these three basics and serves as a fast feedback system for the developers.

The Big Picture

The Walmart iOS application is modularized into different components: Search and Browse, Cart, Pharmacy, In Store, etc., developed by different teams with different technology stacks. Here we will use the Cart component, which is written in React Native, as an example.

The iOS automation test solution we provide within WalmartLabs is based on Appium and its JavaScript client. Test cases are written in Node.js and run with Mocha. We chose a JavaScript binding not only because of the React Native stack in the Cart component, but also to align with our Web frontend automation test solution, which is likewise written in Node.js, and to make use of the toolchain provided by TestArmada, which we’ll see in the following.
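To make that concrete, here is a minimal sketch of what such a Mocha test looks like with the wd Appium JavaScript client. The capability values, app path, and the cartButton accessibility id are illustrative placeholders, not our actual test code:

```javascript
// Minimal Appium UI test sketch: Mocha + the "wd" JavaScript client.
// Capability values and the "cartButton" accessibility id are placeholders.
const wd = require('wd');
const assert = require('assert');

describe('Cart smoke test', function () {
  this.timeout(300000); // simulator boot and app install can be slow
  let driver;

  before(async function () {
    driver = wd.promiseChainRemote('localhost', 4723); // local Appium server
    await driver.init({
      platformName: 'iOS',
      platformVersion: '10.2',
      deviceName: 'iPhone 6 Simulator',
      app: '/path/to/Walmart.app' // built application bundle
    });
  });

  after(async function () {
    await driver.quit();
  });

  it('shows the cart button on launch', async function () {
    const cartButton = await driver.elementByAccessibilityId('cartButton');
    assert.ok(await cartButton.isDisplayed());
  });
});
```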

For test execution, we use Sauce Labs’ cloud-based platform, which frees us from maintaining virtual machines and many different versions of iOS simulators.
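Switching a test from a local Appium server to Sauce Labs is mostly a matter of where the wd client points. A hedged sketch, where the endpoint is Sauce’s standard one and the capabilities and file name are illustrative:

```javascript
// Pointing the wd client at Sauce Labs instead of a local Appium server.
// SAUCE_USERNAME / SAUCE_ACCESS_KEY are the standard Sauce credentials.
const wd = require('wd');

(async () => {
  const driver = wd.promiseChainRemote(
    'ondemand.saucelabs.com', 80,
    process.env.SAUCE_USERNAME,
    process.env.SAUCE_ACCESS_KEY
  );

  await driver.init({
    platformName: 'iOS',
    platformVersion: '10.2',
    deviceName: 'iPhone 6 Simulator',
    app: 'sauce-storage:walmart-app.zip' // bundle uploaded to Sauce storage beforehand
  });
})();
```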

From Slow to Parallel

At first glance, how does running mobile automation tests serially compare with running them in parallel?

Let’s take a closer look at a CI job build log sample from the React Native Cart team:

CI Job build log

There are 6 steps in this CI build, which is triggered by a pull request. The first five are:

  1. npm package installation
  2. Code linting
  3. Unit test
  4. Create React Native bundle
  5. Copy it over to the application

The last step is executing all 84 UI automation tests in parallel, remotely on Sauce Labs’ virtual machines. It includes:

  • Compress the application bundle and upload it to the remote cloud (sketched below)
  • Wait for VMs in the cloud to start up and initiate simulators of specified iOS version
  • Install the uploaded bundle to the simulator
  • Start test execution in parallel, with up to 3 automatic retries on failure
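The first of these steps, compressing the bundle and uploading it, can be done against Sauce Labs’ temporary storage REST API. A hedged Node.js sketch, assuming the legacy /rest/v1/storage endpoint that was current at the time and a placeholder zip path:

```javascript
// Upload a zipped .app bundle to Sauce Labs temporary storage.
// The endpoint follows Sauce's storage REST API; paths are placeholders.
const fs = require('fs');
const https = require('https');

const user = process.env.SAUCE_USERNAME;
const key = process.env.SAUCE_ACCESS_KEY;

const req = https.request({
  method: 'POST',
  host: 'saucelabs.com',
  path: `/rest/v1/storage/${user}/walmart-app.zip?overwrite=true`,
  auth: `${user}:${key}`,
  headers: { 'Content-Type': 'application/octet-stream' }
}, (res) => {
  console.log('Upload finished with status', res.statusCode);
});

// Stream the compressed bundle up to the cloud
fs.createReadStream('./build/walmart-app.zip').pipe(req);
```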

How long does it take? With the help of Magellan, it takes on the order of 10 minutes:

Test result analytics dashboard

Magellan spins up 84 parallel workers (processes), which makes all test cases run at the same time. As the developers continue to add more test coverage, we closely monitor the test result analytics dashboard (another TestArmada tool integrated with Magellan, called Bloop) and adjust the Magellan parallel worker count and CI structure in time.
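In CI this boils down to a single Magellan invocation along these lines. A hedged sketch: the flag names follow the testarmada-magellan documentation, and the exact values and retry semantics are illustrative:

```
# Run the Mocha suite through Magellan on Sauce Labs, with many
# parallel workers and automatic retries of failed tests
magellan --sauce --max_workers=84 --max_test_attempts=3
```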

From Flaky to Deterministic Behavior

This is probably the trickiest part in the mobile world, especially for an application that interacts with so many different kinds of backend services. A carefully designed deterministic test scenario is not enough. Automatic retries are not ideal or enough either. Service flakiness is like a ghost: it will keep haunting the tests unless you eliminate it with mocking.

We use Shifu, a WalmartLabs home-grown mocking solution, to achieve this. Simply put, it starts a mocking server with pre-recorded dummy data to replace the real services. The tricky part lies in two places:

  • Remember we use Magellan to execute all tests in parallel. How do we route different responses from the same backend service URL to different test cases running in parallel? E.g., Test A needs 6 different items in the cart, while Test B needs an empty cart. They both hit the get-cart backend service at the same time, but they obviously need to receive different mocked responses to populate the cart. Currently we start one Shifu mock server for each test, on a different port. Luckily, Magellan has a network port management feature that makes it easy to assign a mocking port to each parallel process.
  • How do we tell the mobile application to hit the mock server at the assigned port instead of the real services? When starting an application in the simulator, we can pass in process arguments, e.g. -mockingUrl=localhost:13000, which gets saved into an object from the NSUserDefaults class; that’s where we tell the application to use this value instead of the real services for specific backend service calls. A sketch of how these two pieces fit together follows below.
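Here is a hedged sketch of that wiring from the test’s side. The MOCK_PORT variable name and the inline mock server are stand-ins for Shifu’s real interface (which we’ll cover in the upcoming deep dive), and the processArguments capability format varies across Appium iOS driver versions:

```javascript
// Per-test mock server on a Magellan-assigned port, plus launch
// arguments telling the app where to find it. MOCK_PORT and the
// fixture path are placeholders; Shifu's actual API may differ.
const http = require('http');
const wd = require('wd');

// Port handed to this worker by Magellan's port management (name assumed)
const mockPort = parseInt(process.env.MOCK_PORT || '13000', 10);

// Stand-in for a Shifu server: replay a pre-recorded get-cart response
const cartFixture = require('./fixtures/cart-six-items.json');
const mockServer = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify(cartFixture));
}).listen(mockPort);

(async () => {
  // Launch the app with -mockingUrl pointing at this test's mock server
  const driver = wd.promiseChainRemote('localhost', 4723);
  await driver.init({
    platformName: 'iOS',
    deviceName: 'iPhone 6 Simulator',
    app: '/path/to/Walmart.app',
    processArguments: `-mockingUrl=localhost:${mockPort}`
  });

  // ... run test steps against the mocked cart ...

  await driver.quit();
  mockServer.close();
})();
```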

We will have a deep dive series on the mocking techniques used here at WalmartLabs. Stay tuned.

From Chaos to Early Stage Testing

CI build sample screenshot

The React Native Cart team has made the UI automation tests run for every single pull request, which provides very early feedback, as well as confidence, to the developers. Plus, all the tests were written by the developers themselves, who have full control over test coverage and are able to respond to test failures very quickly.

Test Result Feedback

To present the test results back to the developers, we use the Admiral dashboard, another open source tool TestArmada provides.

The following screenshot is the overview page for the React Native Cart project. It shows the big picture of the test results for each PR verification:

PR test result overview page

If you click on any of the PR links from the overview page, you’ll find all the test case results for that single PR verification:

Test result details page

Furthermore, if you click on any one of the test results, you’ll see the details of that test run, including CI environment information. And since Admiral is able to embed the Sauce Labs test result link, information like Appium commands and logs, and even the screencast replay, can be seen for this single UI test:

Single test result page

As we can see, the Admiral dashboard provides an all-in-one place for everyone who needs test result information at different levels. For any test failure, all the debug information is included. Most people go directly to the replay to see how the test actually failed.

Summary

At WalmartLabs, iOS UI test automation:

  • Uses Magellan and a cloud-based solution to achieve massive parallelism.
  • Uses Shifu mocking to achieve deterministic behavior.
  • Is promoted to a pull request blocker to provide early stage testing feedback.
  • Shows test results and all related information on the Admiral dashboard.
  • Feeds test case execution statistics into the Bloop analytics dashboard for further efficiency improvement.

What’s next

For the React Native team, there shouldn’t be anything holding back adopting the same Appium tests for Android UI testing too. We are looking into running the Android automation tests in parallel, remotely in Sauce Labs emulators, alongside the iOS tests.

We believe there is still room to squeeze time out of each test run through an improved UI element locator strategy.
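For example (a hedged illustration; the element name is hypothetical), replacing XPath queries with accessibility-id lookups is one of the cheapest wins:

```javascript
// Hedged illustration; the element name is hypothetical.
// "driver" is a wd Appium session as in the earlier sketches.
async function findAddToCartButton(driver) {
  // Brittle and slow: an XPath query walks the whole element tree
  // return driver.elementByXPath('//UIAButton[@name="Add to cart"]');

  // Faster and more stable: a direct lookup by accessibility id
  return driver.elementByAccessibilityId('Add to cart');
}
```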

We want to encourage other mobile application tenants to adopt the same approaches too.

We are applying the same strategy to real device farm testing, which is coming soon.
