What should an exemplary QA do to plan a product’s testing?

Strategising your testing efforts.

Your guide to help build quality software!

Karishma
Technogise

--


Test strategy is about defining testing guidelines, designing and executing test plans, and communicating them to the stakeholders. It considers which aspects of a product to test, by whom, and in what timeline.

To arrive at that clarity, we need to know a bunch of things, such as:

  • What is the product all about?
  • What is the expected scope of testing (in the order of priority)?
  • What is the timeline that we have?
  • What risks does the product have?
  • What are the product’s dependencies?
  • How many team members are testing, and who is testing what?
  • Can team members parallelise their work, e.g. can QA and Dev work in tandem to deliver a story faster?

This list is not exhaustive, but your decisions will be based on the answers to the above questions. Take the answers up with the team and propose the resulting plan to the stakeholders.

The best way to get started with testing is to make sure there are sufficient automated tests: unit, integration, API, UI, performance and security. Run them on a pipeline for every commit that is checked in. This shortens the test feedback cycle and frees up QAs for tasks that need human judgement, such as exploratory testing.
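
As a minimal sketch of what that pipeline step could look like (the suite paths and the "smoke" marker are assumptions for illustration), a single entry-point script can run the suites in order of speed and fail fast:

```python
# run_checks.py - hypothetical pipeline step that runs the automated suites on
# every commit. Suite locations and the "smoke" marker are assumptions.
import subprocess
import sys

SUITES = [
    ["pytest", "tests/unit", "-q"],               # fastest feedback first
    ["pytest", "tests/api", "-q"],                # API / integration tests next
    ["pytest", "tests/ui", "-q", "-m", "smoke"],  # only a UI smoke subset per commit
]

def main() -> int:
    for cmd in SUITES:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail the pipeline as soon as one suite fails to keep feedback quick.
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(main())
```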

Things to keep in mind from a QA's perspective when working on timelines are:

1. Feature scope and delivery timeline:

This commitment should be feasible enough that the team does not feel it is racing against time to complete the features. Otherwise the definition of “Done” for stories suffers, typically by letting go of automated tests. Automation then becomes a backlog item to tackle later. I strongly recommend avoiding this.

What if it is not feasible?

Sometimes, delivery timelines turn out to be far too stringent and insufficient. In such a case you need to make do. Can the team at least automate the acceptance criteria (AC) in the given timeline? If yes, then we do it as part of the definition of done. If not, then we should track it (on the agile board) and cover it in the upcoming iteration.

2. Manual testing:

While testing a story or bug, I prefer to talk to the developers and get a sense of what they feel should be tested as a priority and must not be missed. This gives a streamlined approach to testing.

At other times, exploratory testing should be performed. Time-box it and focus on a particular feature (or a small set of features) at a time.

3. Choice of automation framework:

It is generally advisable to use the same language/framework for development and testing across the application. This has the benefit of quick integration with the existing tech stack, and it allows multiple people to contribute to the tests.

One of the factors to consider here is whether the team has prior experience with this framework/language. If not, we need to take the learning curve into account in the overall delivery timeline.

4. Automation efforts:

A lot of times we face the dilemma of testing comprehensively vs buying “sufficient” time. Automation, if done right, helps with this.

One principle that I prefer is finding the easiest and quickest way to test a scenario. For example, if a scenario can be automated at a lower level (unit, integration or API), add it there rather than testing it manually or through the UI.
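
For example, a business rule such as a bulk-order discount can be checked directly at the API level with a small test like the sketch below; the endpoint, payload and response fields are hypothetical, and the UI test then only needs to confirm that the quote is displayed.

```python
# test_discount_api.py - hypothetical API-level check for a business rule,
# instead of verifying it through the UI. Endpoint and fields are assumptions.
import requests

BASE_URL = "https://staging.example.com/api"

def test_discount_applied_for_bulk_order():
    payload = {"sku": "ABC-123", "quantity": 50}
    response = requests.post(f"{BASE_URL}/quotes", json=payload, timeout=10)
    assert response.status_code == 200
    quote = response.json()
    # The rule itself (10% bulk discount) is asserted at the API layer, which
    # is quicker and more stable than driving the UI for the same check.
    assert quote["discount_percent"] == 10
```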

Sometimes, even in agile environments, communication can be poor. QAs might not get to know about additional changes made during story development or a bug fix. In such cases, automated tests help verify those additional changes in time.

(Test pyramid. Image source: https://opensource.com/article/18/11/continuous-testing-wrong)

I will focus on the topmost layer of the test pyramid, which QAs generally contribute to most actively. This layer is the thinnest of them all and is typically GUI-based. Here we validate the end-to-end system and detect major problems or gross regressions, ideally before a build is picked up for extensive testing.

Now, we need to decide what to automate and what to test manually. At this layer, automate flows that are close to real user behaviour and that interact with the application across multiple features; this increases their bug-finding potential.
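
A rough sketch of such a journey-style GUI test, using Selenium purely as an example (the URL and locators are assumptions, and explicit waits are omitted for brevity):

```python
# test_checkout_journey.py - sketch of a GUI test that follows one user journey
# across several features (search, cart, checkout). Locators are assumptions;
# a real test would use WebDriverWait instead of relying on instant page loads.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

def test_search_add_to_cart_and_checkout():
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com")
        driver.find_element(By.NAME, "q").send_keys("wireless mouse", Keys.RETURN)
        driver.find_element(By.CSS_SELECTOR, ".product-card .add-to-cart").click()
        driver.find_element(By.ID, "cart-link").click()
        driver.find_element(By.ID, "checkout-button").click()
        # One journey touches search, cart and checkout, which is where
        # cross-feature regressions tend to surface.
        assert "Order confirmation" in driver.page_source
    finally:
        driver.quit()
```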

Several parameters affect the cost of automated tests. We need to make sure that the cost of writing and maintaining them, as well as their execution time, is as low as possible. Factors that help with this are:

  • writing self-documented and clean code.
  • writing the test skeleton while the feature is still in development.
  • using APIs instead of the UI for prerequisites wherever possible; this additionally makes tests faster and less error-prone (see the sketch after this list).
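
To illustrate that last point, here is a rough sketch in which the prerequisite (an existing order) is created through an API call and only the behaviour under test goes through the UI; the endpoint, token and locators are assumptions.

```python
# test_order_history_ui.py - sketch of API-based test setup. Creating the order
# via the API is faster and less flaky than clicking through the ordering flow.
import requests
from selenium import webdriver

BASE_URL = "https://staging.example.com"
TOKEN = "test-user-token"  # assumed pre-provisioned test credential

def create_order_via_api() -> str:
    response = requests.post(
        f"{BASE_URL}/api/orders",
        json={"sku": "ABC-123", "quantity": 1},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["order_id"]

def test_order_appears_in_history():
    order_id = create_order_via_api()  # prerequisite via API, not the UI
    driver = webdriver.Chrome()
    try:
        driver.get(f"{BASE_URL}/orders")
        # Only the behaviour under test (order history display) uses the UI.
        assert order_id in driver.page_source
    finally:
        driver.quit()
```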

Automated tests should have good bug-finding potential and a reasonable lifespan. Before choosing a scenario to automate, ask: “Will this scenario survive changes in the application?” Unless the application is stable enough in terms of business requirements and code, it is unwise to automate that scenario. The fundamental idea is that a test should be good at finding bugs throughout its lifetime.

The more these tests run, the more valuable they become. Run them on every commit (in your pipeline) and you will know when a commit breaks something.

Sometimes a use case has many variations. Is it worth automating all of them? I prefer not to; it is best to avoid making GUI tests bulky. As always, target quick feedback and choose only the critical variations to automate at this level.

5. Test data:

Test data creation is a time-consuming exercise that tends to be repetitive over the project life cycle. It is imperative that we account for it and look for ways to optimise it. When we need a large data set for testing, it is ideal to automate test data generation: write scripts that generate random data, hit the API(s) in a loop with different payloads, or use spreadsheets with formulae. How large the data set needs to be varies per project, as does its purging requirement.
Sometimes, once the product goes live, we encounter different data points. As QAs we need to keep an eye out for them and incorporate them into future testing cycles.
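
As a rough sketch of “hitting the API(s) in a loop with different payloads”, the script below seeds an environment with randomised customers; the endpoint and fields are assumptions, and a real project would also handle authentication and purging.

```python
# generate_test_data.py - hypothetical data-seeding script: generate random
# payloads and post them to an API in a loop. Endpoint and fields are assumed.
import random
import string

import requests

BASE_URL = "https://staging.example.com/api"

def random_customer() -> dict:
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return {
        "name": name.title(),
        "email": f"{name}@example.com",
        "age": random.randint(18, 80),
    }

def seed_customers(count: int = 1000) -> None:
    for _ in range(count):
        response = requests.post(f"{BASE_URL}/customers", json=random_customer(), timeout=10)
        response.raise_for_status()

if __name__ == "__main__":
    seed_customers()
```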

A similar thought process (for test data creation, choice of automation framework and automation effort) must be applied to performance and security testing.

6. Code reviews:

As a QA, I prefer to have time-boxed code reviews, not just of the automated tests that we write, but also of the developers’ code. This practice has the following advantages:

  • Developers and QAs get to discuss what has changed, what has already been tested and what is expected of the QA cycle.
  • Both QAs and Devs can spot bugs in the code or gaps in test coverage, thus shortening the feedback cycle.

What if a code review is not feasible for some reason?

In such cases I’d recommend that QAs check the PRs themselves, try to understand the basics of the changes, and take their work forward from there. This is much easier if clean code practices are followed.

7. Conflict removal:

Two items that are critical to the timely delivery of a product are issues and their prioritisation. These can generally lead to conflict when:

production issues are discovered:

The entire team needs to make sure that an IRP (Incident Response Plan) is available, and that the root cause is identified and fixed in time. The immediate aim is to reduce business loss; retrospect later and work on improving the processes. Handled this way, we rid the team of blame and finger-pointing.

an issue needs re-prioritisation:

A lot of the time, we as QAs need to help the team realise that a particular issue needs to be fixed at a higher priority. In such cases it is best to bring facts and data that corroborate our opinion; this helps the team reach a final agreement collectively and efficiently.
