Automated & manual QA as one integrated process

Lizi Boros
Supercharge's Digital Product Guide
8 min read · Aug 30, 2021

Get the best of both testing worlds through reporting

Photo by Rob Hampson on Unsplash

If you’ve ever worked on a project where the end goal is to provide quality applications for clients, you know how important proper testing is. Now imagine that significance on a much bigger scale, one that only grows as new clients come in and ever better solutions are required. At some point, it’s time to spice up your good old manual testing with automation to keep up with the growing QA demand.

But how do you go about it? This article attempts to tackle that problem by offering up one viable solution, focusing on the test reporting aspect.

We’re not delving into the differences between the two types of testing or outlining the pros and cons of each, or even the merit of using them side by side, as there are several sources about those out there already.

So let’s take a high-level look at the problem of integration, figure out when and where it could be relevant, and walk through our implementation of it.

To automate or not to automate

When it comes to adding automated testing to a project, among the first considerations is this question: is it worth it? Even though that is not the topic of this article, it’s something to think about once a project reaches a certain scale.

Which is exactly what happened in our case. The project in question is a big one, so big that several manual testers have already been working on it together, due to the frequency of releases and the scale of the testing demand that comes with it.

This is because we have many different customers that we provide with the same core solution. That means regular release builds of multiple new applications, as well as adding occasional new features to the older ones, often at the same time. This kind of setup, while maintainable with an ever-increasing number of people on the team, is difficult to keep up with in the long term using only manual testing.

Add to that the fact that these apps share so many of the same functionalities, including the entire onboarding flow, that it makes sense to cover those overlaps with automated tests. That way the automation framework and testing code can be reused between apps with only minor adjustments, while saving time for manual testers by narrowing their responsibilities, making our job easier in more ways than one.

First steps

Once we’ve decided to bring automation into the mix, the next step is to figure out how.

As we already had an existing manual testing process in place, part of our job was to fit the automation into that world. We wanted to make the QA team’s life easier, instead of even more complicated.

The quest

This is where most articles on the topic cite challenges when it comes to integrating testing efforts. Among these worries are separate teams, gaps in knowledge and communication between them, testing the same things twice as a result, or even worse, leaving something out. These are all valid concerns, and the goal was to find a way to minimize these pitfalls, while still making the most of the mixed testing approach.

To that end, we looked for tools that would enable us to integrate the result of the automated tests right into the manual test executions. That way, automation isn’t completely separate and manual testers can use the results of it to their advantage. This can reduce time spent on regression testing for example, while making sure that nothing gets overlooked.

Weapons of choice

What we needed for this was twofold. First, a CI tool capable of running our tests on remote devices and reporting the results back to us.

This was the easy part, as that was already a given: we use Bitrise on the project for builds and release jobs, so the easiest solution was to use the same tool for automation jobs as well. That said, any old CI works as long as it provides the option to include a custom script among the steps, which we will talk about later on.

Second, we needed to find a way to integrate our automation results into the existing manual workflow, so that they would show up in the same execution. That way, both testers and project managers could check the results of all testing at the same time, and with a little handover, the manual team would be able to follow up on failed tests as well.

This is where Xray, a Jira testing tool, comes in. The reason it was selected is that it can handle both manual and automated test cases in the same repository, as well as the same test execution, which is precisely what we were aiming for. Now let’s see what it can do!

Photo by Farzad Nazifi on Unsplash

Bringing it all together

Our integrated testing approach with Xray starts like any other: create an execution, add your manual test cases, and start testing.

However, what we wanted was for the status of the tests to be set by our automation test job, using a mapping solution provided by Xray. Then, manual testers would only have to execute the remaining tests that were not covered by automation, as well as follow up on the failing ones.

This is what that mapping solution looks like in practice.

Recipe

The ingredients we need are the following:

  • an API key from Xray
  • an XML report generated by the CI (a trimmed sample follows below)
  • and a little script.
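
The XML report is a standard JUnit file produced by the test step. The exact attributes and quoting depend on the tool that generates it; the made-up sample below simply mirrors the format our script expects, including the UITests. class-name prefix that gets stripped later on.

<testsuites>
  <testsuite name='UITests' tests='2' failures='1'>
    <testcase classname='UITests.OnboardingTests' name='testLoginHappyPath' time='12.4'/>
    <testcase classname='UITests.OnboardingTests' name='testLoginInvalidPassword' time='9.8'>
      <failure message='Error banner not shown after invalid password'/>
    </testcase>
  </testsuite>
</testsuites>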

The following code is included in the steps of the automation job that runs the whole operation. It leverages Xray’s REST API to create (or replace if already existing) test cases and set their status according to the automation results — which are coming from the XML report mentioned above.

if [ -n "${EXECUTION_KEY}" ]; then
  # Build the authentication payload from the Xray API key (client id + secret)
  json='{"client_id": "'"$CLIENT_ID"'", "client_secret": "'"$CLIENT_SECRET"'"}'

  # Authenticate and get the token; the endpoint returns it as a quoted JSON
  # string, so the sed call strips the surrounding quotes
  token=$(curl -s -H "Content-Type: application/json" -X POST --data "$json" https://xray.cloud.xpand-it.com/api/v2/authenticate | sed 's/^.//;s/.$//')

  # Drop the "UITests." prefix from the classnames so the generated test names stay short
  sed "s/classname='UITests./classname='/" "$BITRISE_DEPLOY_DIR/report.xml" > "$BITRISE_DEPLOY_DIR/report_changed.xml"

  # Send the JUnit results to Xray, into the given project and test execution
  curl -H "Content-Type: text/xml" -X POST -H "Authorization: Bearer $token" --data @"$BITRISE_DEPLOY_DIR/report_changed.xml" "https://xray.cloud.xpand-it.com/api/v2/import/execution/junit?projectKey=${PROJECT_KEY}&testExecKey=${EXECUTION_KEY}"
else
  echo "EXECUTION_KEY not found"
fi

For the script to work, we also need to pass some values to the automation job via Bitrise: the Bitrise deploy directory, the client id and secret (which come with the API key), and the project and execution keys of the execution we want the results reported to. These can be defined either as environment variables or passed as parameters to the job itself.
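
As a minimal sketch of one way to wire this up (the values below are made up, and your workflow setup may differ): the secrets can live in the workflow’s secret environment variables, while the execution-specific keys can be exported for later steps from a Script step using Bitrise’s envman, or supplied as build parameters when the job is triggered.

# CLIENT_ID and CLIENT_SECRET are stored as secret env vars in the Bitrise workflow.
# The keys below are placeholders; export them from an earlier Script step
# (or pass them in as build parameters) so the reporting step can pick them up.
envman add --key PROJECT_KEY --value "APP"
envman add --key EXECUTION_KEY --value "APP-1234"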

Challenges

Like in any new endeavor, things didn’t entirely go as planned.

As anyone in IT would understand, we originally looked for the simplest solution possible. This would have been to set the results of the existing manual tests directly, with the help of automation and the aforementioned Xray mapping.

This was not meant to be, as the API solution was not designed to work with manual-type test cases. We ended up using generic, unstructured ones, where no manual testing steps are allowed, making it impossible to use the already defined test set for automation purposes. So that blew our nice, simple idea out of the water. 🙃

However, it did not stop us from forging ahead and trying to find the outcome closest to what we had envisioned. In the end, we kept both sets of tests in the execution and came up with a linking solution that lets us keep track of which automated test covers which manual one, ensuring accountability for all of them.

That sounds a bit complex, so let’s break it down.

In practice

So we’ve got our test execution. Check. ✔️

We’ve also got all the manual tests that we’ve been using before automation came into the picture. Check. ✔️

And once we’ve run the automation job at least once (with the project key and execution key parameters included), then we’ve also generated our automated tests as a result, with their status set to passed or failed as the case may be. Check. ✔️

Now all that’s left is to run the remaining manual tests. To make sure that we don’t do any extra work, there are two more things to do here before we start.

First, the linking. Xray being a Jira-integrated tool, each test case is essentially a ticket, which lets us use the link issues feature between them. This way, we make sure that all automated tests are linked to manual ones. It may seem tedious, as it needs to be done by hand, but the good news is that we only need to do it once. After the initial setup is completed, it all runs like clockwork.

Second, the flag. Xray also offers the option to label test cases, either with existing labels or custom ones. We took advantage of this by marking the tests covered by automation with a custom Automation Implemented flag, so that when the execution is created, only the ones not covered are included in the manual run.

This way we can ensure that manual testers only test what automation doesn’t cover, significantly reducing the manual load. In addition, the linking also works as a failsafe. In case something fails in automation (and it will), QA can open the linked manual test and follow up on the result to check whether something is actually amiss, and create a ticket if necessary. That way all bases are covered.

Closing

So to recap, we went over one possible way to integrate automated testing into an existing manual process to reduce manual QA effort, and we did that by combining test results in the same execution.

Through our example, we’ve seen how an API tool can be used in combination with a CI to map automated test results to an execution, while still keeping the safety net of manual testing close at hand.

Along the way, we’ve encountered unexpected obstacles, which is bound to happen on any project. Our advice would be to always start small, experiment, and then bring your solution to the project once you know what it’s going to look like in practice. This is what we did, and it paid off.

So, to circle back to the question posed in the beginning: is it worth it? The short answer is: not for every project. But in our case it was. 🦾

Special thanks to Gábor Tachtler, as well as Zoárd Kéri, Réka Kosik, and Dávid Kovács.

At Supercharge, we are a next-generation innovation agency working with our clients to create transformative digital solutions. If you liked this article, check out some of Supercharge’s other articles on our blog, or follow us on LinkedIn and Facebook. If you’re interested in open positions, follow this link.
