How to Pick Test Cases for Automation

Rampraesath Kumaran
Fave Product Engineering
5 min read · Aug 10, 2020


A few weeks ago, I gave a talk on this topic at the Ministry of Testing Kuala Lumpur meetup. After the talk, I wanted to extend its reach to fellow testers who might need it. So instead of merely sharing the slides, I’m writing this article to provide better context. I hope it benefits you.

One of the many challenges QA engineers face when automating their testing is selecting which test cases to automate from a suite that can run to hundreds or even thousands. I ran into this problem myself while building my first automation framework. Even after plenty of online research, I struggled to filter my test cases, because applying the published methods to a thousand of them would take a tremendous amount of time. So I decided to come up with my own recipe and test whether it works.

But before diving into that, let’s take a look at a famous question that keeps coming up in the test automation field.

Is 100% test automation possible? My take on that: never! No machine can automate a tester’s instinct. My favourite analogy on this comes from a well-known author:

Autopilot in an airplane cannot replace the pilot. It’s there only as an aiding tool for the pilot. The same goes for automation and testers.

Pradeep Soundararajan

Now that we are left with the areas that can be automated, should we automate all of them?

Of course, we could automate everything that can be automated, but several factors must be considered. Money, time, and deadlines are the typical workplace constraints that make this hard to achieve. So what do we do? The smartest move is to automate the “right” test cases.

But how do we pick the right test cases?

THE SCORE MODEL

This recipe comes from the well-known Angie Jones. All you need to do is give each test case points against criteria such as gut feeling, risk, value, history, and a few more. Once you have the total score, you use a range-based decision model to decide whether or not to automate the test case.
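
As a rough sketch of how such scoring might look in code (the criteria, point values, and score thresholds below are my own illustrative assumptions, not the actual figures from Angie Jones’s model):

```python
# Illustrative sketch of a score-based selection model.
# Criteria, points, and thresholds are hypothetical examples.

def score_test_case(points: dict) -> int:
    """Sum the points given for each criterion (e.g. risk, value, history)."""
    return sum(points.values())

def decision(score: int) -> str:
    """Map a total score onto a range-based decision."""
    if score >= 80:
        return "Automate now"
    if score >= 50:
        return "Possibly automate"
    return "Do not automate"

login_test = {"risk": 30, "value": 30, "history": 20, "gut": 10}
total = score_test_case(login_test)
print(total, "->", decision(total))  # 90 -> Automate now
```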

THE MODULARISATION MODEL

The modularisation model is similar to Angie Jones’s, but instead of assigning a score you answer “YES” or “NO” for each factor. Another difference is that this model considers extra factors such as the complexity of the test case and its repeatability.
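
A minimal sketch of the yes/no approach, assuming one simple interpretation where every factor must be a “YES” (the factor names here are illustrative):

```python
# Illustrative yes/no checklist; the factors shown are assumptions.

def should_automate(answers: dict) -> bool:
    """Automate only if every factor gets a 'YES' (True)."""
    return all(answers.values())

checkout_test = {
    "repeatable": True,             # will it run on every release?
    "stable_feature": True,         # unlikely to change soon?
    "worth_the_complexity": False,  # is the effort justified?
}
print(should_automate(checkout_test))  # False -> skip for now
```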

These two recipes are incredibly useful because they give you concrete metrics for clear decision making. However, assigning points to a thousand test cases is an exhausting task. So, mustering some confidence to experiment, I came up with my own experimental recipe for picking the right test cases to automate.

As funny as it sounds, this is the best mnemonic I managed to come up with.

Each word of the mnemonic represents a step in the process; I framed it this way so the steps are easy to remember. Now let’s look at what each step means.

Taking a step back is crucial, because when you are wearing the “automation hat” there is a high chance you will be inclined to immediately pick the most exciting or most repetitive test case to automate. Instead of jumping into action, visualise the whole system and understand the interactions between its components and what they mean.

The second step is identifying the application’s goal and selecting the core functionalities and use cases needed to support it.

Once you have identified the core functionalities and test areas of the application, the next step is to think through and list the possible combinations of those areas.
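
For example, a quick way to enumerate pairwise combinations of functional areas (the area names here are made up for illustration):

```python
from itertools import combinations

# Hypothetical core functional areas of an e-commerce application.
areas = ["login", "search", "cart", "checkout", "payment"]

# Enumerate every pairwise combination as a candidate test area.
for pair in combinations(areas, 2):
    print(pair)
# ('login', 'search'), ('login', 'cart'), ... 10 pairs in total
```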

To avoid selecting ALL the combinations, filter them based on ingredients/factors such as complexity, testing time, criticality, risk, and so on. The weightage of each factor depends on the type of application you are testing.

If we are testing healthcare or finance-related software, risk and criticality carry a very high weightage. In comparison, if we are dealing with blogs or simple listing websites, the data-driven factor carries a higher weightage and risk is relatively lower.
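
One way to express that weighting in code, with purely illustrative weight values:

```python
# Illustrative factor weights per application type; all values are assumptions.
WEIGHTS = {
    "healthcare": {"risk": 0.4, "criticality": 0.4, "complexity": 0.1, "data_driven": 0.1},
    "blog":       {"risk": 0.1, "criticality": 0.1, "complexity": 0.2, "data_driven": 0.6},
}

def weighted_score(factor_scores: dict, app_type: str) -> float:
    """Combine per-factor scores (0-10) using the weights for the given app type."""
    weights = WEIGHTS[app_type]
    return sum(factor_scores[factor] * weight for factor, weight in weights.items())

scores = {"risk": 9, "criticality": 8, "complexity": 5, "data_driven": 3}
print(round(weighted_score(scores, "healthcare"), 2))  # 7.6 -> strong candidate
print(round(weighted_score(scores, "blog"), 2))        # 4.5 -> weaker candidate
```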

Apart from selecting test cases, we must also eliminate test cases that should not be automated. We can drop them based on specific criteria, such as test cases tied to an A/B test, features that will be updated in a short time, and features with low usage frequency.

Low usage frequency does not mean a test case should never be automated. If you have to prioritise, you can simply rank it lowest and handle it later, when you have spare resources.
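
Here is a sketch of applying those elimination rules, deprioritising rather than dropping the low-usage cases (the records and field names are hypothetical):

```python
# Hypothetical test-case records; the names and fields are illustrative.
test_cases = [
    {"name": "banner A/B variant",  "ab_test": True,  "changing_soon": False, "usage": "high"},
    {"name": "checkout happy path", "ab_test": False, "changing_soon": False, "usage": "high"},
    {"name": "export to CSV",       "ab_test": False, "changing_soon": True,  "usage": "low"},
]

def triage(tc: dict) -> str:
    """Apply the elimination criteria described above."""
    if tc["ab_test"] or tc["changing_soon"]:
        return "exclude"        # not worth automating right now
    if tc["usage"] == "low":
        return "deprioritise"   # automate later, with spare resources
    return "automate"

for tc in test_cases:
    print(tc["name"], "->", triage(tc))
```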

In conclusion, this recipe strongly revolves around understanding the system as a whole rather than diving straight into thousands of test cases.

Based on my experience, I strongly recommend building your own recipe, using other materials (including mine) only as a guide. This is important because there is no one-size-fits-all solution. So be brave enough to experiment with your own methods, and keep learning from others as well.

I hope you’ll share your own experience of how you selected the right test cases to automate.

Originally published at http://synapse-qa.com on August 10, 2020.
