Applause crowd testing platform: Testing in the Sprint

Nataliya Kostova
Tide Engineering Team
6 min read · Jun 3, 2020

With over 2.5 billion active Android users across more than 24,000 different devices, and about 1.4 billion active iOS users spread over 25 different iPhones, we are well past the point where a mobile application can be tested on every single device to ensure its compatibility and stability.

Background:

Crowd testing platforms are becoming more and more popular as they give companies the ability to test on a wide range of devices, platforms and mobile carriers. The testers using the platform also come from different backgrounds and have varying levels of experience and expertise.

With a small functional QA team that needs to focus on new features and release testing, and with automation for the mobile UI layer still in its early stages, our developers need a safety net for the sprint work they produce each day. To reduce the number of issues found during our sprints, and to avoid exhausting our in-house QA engineers with repetitive regression testing every release cycle, we introduced Applause crowd testers into our Sprints.

Trial and error:

Cooperating with Applause is really easy, and we were able to figure out the optimal frequency of test cycles during the sprint so that we get the best value out of them.

At first, we started with 4 test cycles per Sprint.

Running them from Day 4 through Day 7, we noticed that there weren't many changes pushed by the development team between the separate builds. On the other hand, by the time bugs had been moved from Applause to our project management system and prioritised, there was no time to fix them before the next cycle started. In the end this wasn't giving us the optimal outcome, so we adjusted the number of test cycles.

Our next approach was to lower the number of cycles to two per sprint, placed around the middle, where enough changes would have been pushed and the development team would still have time to process and fix any critical issues. So our second attempt at finding the best solution was to execute them on Day 5 and Day 7 of the sprint.

Two cycles per sprint turned out to be the right number, but the days of execution were too close to each other, and Day 7 was not exactly what we needed. On Day 7 our developers were still pushing plenty of changes to the develop branch, and Day 9 was too close to the end of the sprint to leave much time for bug fixes before the release candidate, so we moved our second test cycle to Day 8. For our particular needs, Day 5 and Day 8 proved to be the optimal schedule.

The process flow:

In more detail, our process looks like this:

  • A cut from the latest develop branch is made nightly, producing builds that can be uploaded to Applause;
  • On the mid-Sprint Tuesday and Friday we upload the build to Applause and start a Testing Cycle. Each cycle is 4 hours long, from 10 AM till 2 PM, so after 2 PM the QAs have time to review the defects raised by the Applause testers and export them to our project management system (a rough sketch of what scripting this step could look like follows this list);
  • New issues found during testing are logged in Applause and then verified by TTLs (Test Team Leads) on their side. A backlog of known issues is kept so that no duplicates are raised, and any known issue can be retested on request to verify it has been fixed;
  • Issues are reviewed by our in-house QAs and then Approved or Rejected according to our business requirements;
  • Valid issues that do not already exist in our project management system are Exported;
  • Confirmed bugs are Prioritised and Fixed by our Development team.
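As a rough illustration of how the build upload and cycle kick-off could be scripted from CI, here is a minimal Python sketch. Everything Applause-specific in it (the endpoint URLs, the authentication header, the request fields) is a placeholder invented for the example rather than the real Applause API, so read it as the general shape of the automation, not a drop-in integration.

```python
"""Hypothetical sketch: upload the nightly build and open a test cycle.

The endpoints, fields and authentication below are illustrative assumptions,
not the real Applause API.
"""
import os
import sys

import requests

API_BASE = os.environ.get("CROWD_TEST_API", "https://example.invalid/api")  # placeholder URL
API_KEY = os.environ["CROWD_TEST_API_KEY"]                                  # placeholder credential


def upload_build_and_start_cycle(build_path: str) -> None:
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Upload the build produced by the nightly cut from the develop branch.
    with open(build_path, "rb") as build_file:
        upload = requests.post(
            f"{API_BASE}/builds",
            headers=headers,
            files={"build": build_file},
            timeout=300,
        )
    upload.raise_for_status()
    build_id = upload.json()["id"]

    # 2. Open a 4-hour test cycle against that build and point the testers
    #    at our key areas.
    cycle = requests.post(
        f"{API_BASE}/test-cycles",
        headers=headers,
        json={
            "build_id": build_id,
            "duration_hours": 4,
            "key_areas": ["Onboarding", "Account recovery", "Payments", "Cards"],
        },
        timeout=60,
    )
    cycle.raise_for_status()
    print(f"Started test cycle {cycle.json()['id']} for build {build_id}")


if __name__ == "__main__":
    upload_build_and_start_cycle(sys.argv[1])  # path to the nightly .apk/.ipa
```

Scheduled to run on the mid-Sprint Tuesday and Friday mornings, a step along these lines would keep the upload and cycle kick-off out of the QAs' hands.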

During the Test cycles in Applause, testers have time to identify and log newly introduced UI issues, changes to the flow, crashes of our app, and so on. Once the QAs export them to our project management system, the Development team can quickly fix the issues deemed critical so that they do not make it into the Release candidate.

The Test cycles include predefined Test cases to be followed as well as exploratory testing of the application, so that a variety of issues can be identified. All testers provide proof of execution with screenshots, and all defects have logs attached for easier identification of the root cause. Each cycle we call out our most critical areas as key areas to be tested: Onboarding, Recovery of accounts, Payments and Cards. That way we ensure that our main functionality has not been compromised during Sprint work on new features or refactoring of legacy code, and that major issues like crashes do not reach our members.

What we gain:

  • We can identify issues on specific devices/operating systems;
  • We can identify critical issues like crashes on startup or during specific end-to-end flows;
  • Our Developers are more confident shipping their features, knowing that they were tested by multiple testers before going live;
  • We are provided with reports on how many testers have tested our app, the different devices that have been used, etc.;
  • Any issue can be retested to verify that a bug has been fixed.

Bumps in the road:

Even though we tried to embed a clear process flow, there were some uncertainties, and some parts of the process were not as well thought through, such as:

  • Specific Sprint changes need to be clearly communicated each cycle so that the testers know the affected areas of our software and can concentrate on them;
  • We don't have a dedicated project in our project management system, so all raised issues have to be exported manually one by one, which slows down the identification of bugs and lets some of them slip through to our next release (a rough sketch of how this export could be automated follows this list);
  • With no specific Test cases provided from our side, the testing was mostly exploratory, and testers seeing our app for the first time didn't raise as many high-value issues as we were expecting;
  • A dedicated user on our side has to be assigned to unblock the Crowd testers for each cycle, as most tests require approval; this takes time and effort and, when not automated (as in our case), it slows down the process.
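As a sketch of what automating that export could look like, the snippet below reads a CSV of approved defects and creates one bug per row through the Jira Cloud REST API. Jira is used purely as an example of a project management system, and the CSV column names, project key and credentials are assumptions made for the illustration.

```python
"""Hypothetical sketch: bulk-create approved crowd-testing defects in Jira.

The CSV columns ('summary', 'description', 'device'), the project key and the
credentials are illustrative assumptions.
"""
import csv
import os
import sys

import requests

JIRA_URL = os.environ["JIRA_URL"]          # e.g. https://yourcompany.atlassian.net
JIRA_USER = os.environ["JIRA_USER"]        # Atlassian account email
JIRA_TOKEN = os.environ["JIRA_API_TOKEN"]  # API token used for basic auth
PROJECT_KEY = "APP"                        # hypothetical Jira project for crowd-testing bugs


def create_bug(summary: str, description: str) -> str:
    """Create a single Bug issue via the Jira Cloud REST API and return its key."""
    response = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        auth=(JIRA_USER, JIRA_TOKEN),
        json={
            "fields": {
                "project": {"key": PROJECT_KEY},
                "summary": summary,
                "description": description,
                "issuetype": {"name": "Bug"},
            }
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["key"]


def export_defects(csv_path: str) -> None:
    """Create one Jira bug per approved defect row in the exported CSV."""
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            key = create_bug(
                summary=f"[Crowd] {row['summary']} ({row['device']})",
                description=row["description"],
            )
            print(f"Created {key} for '{row['summary']}'")


if __name__ == "__main__":
    export_defects(sys.argv[1])  # path to the exported defects CSV
```

Run once after each cycle, a script along these lines would turn the one-by-one export into a single step and get confirmed bugs in front of the development team sooner.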

Conclusion:

The flexibility and ease of working with the Applause team made finding the flow that best meets our expectations simple. Testing on multiple devices by testers with a variety of experience does bring out different kinds of flaws in our app, whether that is a bad user experience, a malfunction during everyday use, a crash, or just a simple UI issue. Overall, using a crowd testing platform like Applause helps us create a so-called 'safety net' for our Developer teams, both on the FE and the BE, for the products that they ship.

In conclusion, I would say that having a clear, predefined process is crucial to good collaboration with any third party. A test plan with well-written test cases, a clear how and when of execution, and a team dedicated to delivering good quality products are the keys to successfully integrating any outside party into your Sprint work.
