Addressing the absence of testing techniques

Michael Glennon
AppLearn Engineering
4 min read · Jan 14, 2022

Over the past decade, the software development industry has seen mass adoption of exploratory testing. And with the wider move to scrum/agile methodologies, it’s not hard to see why.

Exploratory testing is a great testing technique, and it lends itself very well to agile software development. It’s an efficient way to find defects within a timebox, with limited test planning and documentation compared to its predecessors.

Software doesn’t always have the same respect for testing techniques…

Prior to the agile world of exploratory testing, the QA industry had a structured level of planning, scripting and documentation. This was very prescriptive and didn’t leave much room for the creative side of “how can I break this?” or “there is something missing”.

However well documented the drawbacks of the old world were, in my experience we are wrong to have left some of the old techniques behind.

A solid basis for any form of testing, whether exploratory or scripted, is the use of testing techniques. Since the advent of scrum methodologies, I’ve found the lack of knowledge of these techniques quite surprising.

During this time I have interviewed over 50 QA engineer candidates, a pretty good sample, and more often than not I find a technique-shaped hole. Whenever I dig into “tell me which testing techniques you apply in the test execution phase”, I am met with confusion: “Can you ask me that question again?” “Do you mean regression testing?”

The reality is, most QA candidates use these techniques indirectly but struggle to articulate what they are and what benefits they bring. This is an oversight and can lead to extended testing cycles.

As QAs, we seem to have forgotten what techniques we have in our arsenal to aid exploratory and regression testing. These techniques are often frowned upon as “…from the ISEB foundation, which is outdated”. But they should be applied daily within the exploratory testing space, and even more so in regression testing cycles, which are the most expensive testing phase.

The absence of testing techniques such as equivalence partitioning, boundary value analysis and decision tables is quite concerning. Of course, exploratory testing itself is a testing technique, but it needs to use other complementary techniques to ensure exploration doesn’t turn into ‘ad-hoc testing’.

As for the reasons to apply testing techniques, they unlock benefits like:

  • Providing triggers for negative testing, which is often overlooked
  • Making sure software partitions are considered
  • Segmenting a feature into smaller sub-features
  • Targeting the boundaries of the input domain, where application errors are most likely to occur
  • Covering the maximum number of requirements with the fewest test cases through equivalence partitioning (see the sketch after this list)
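To make the last two bullets concrete, here is a minimal sketch of equivalence partitioning and boundary value analysis applied to a hypothetical age field that accepts values from 18 to 65. The `is_valid_age` function and the chosen range are assumptions for illustration only, not taken from any real feature.

```python
# Minimal sketch: equivalence partitioning and boundary value analysis
# for a hypothetical "age" field that accepts 18-65 inclusive.
import pytest


def is_valid_age(age: int) -> bool:
    """Hypothetical validation rule: ages 18 to 65 inclusive are accepted."""
    return 18 <= age <= 65


# Equivalence partitioning: one representative value per partition
# (below range, within range, above range) instead of testing every age.
# Boundary value analysis: values at and either side of each boundary,
# where defects tend to cluster.
@pytest.mark.parametrize("age, expected", [
    (10, False),                            # partition: below valid range
    (40, True),                             # partition: within valid range
    (90, False),                            # partition: above valid range
    (17, False), (18, True), (19, True),    # lower boundary
    (64, True), (65, True), (66, False),    # upper boundary
])
def test_age_validation(age, expected):
    assert is_valid_age(age) is expected
```

Three representative values cover the partitions and six exercise the boundaries, rather than dozens of arbitrary ages that would all tell you the same thing.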

I’m not going to explain each of these different techniques in detail. However, I will take one technique to illustrate the benefit: Decision tables.

Decision tables are very effective for mapping out test cases for complex features with many variables. Exploratory testing alone would not give a comparable level of coverage or easily work out the permutations.

They are a simple technique for mapping out the number of test cases required and their expected outcomes. By highlighting gaps in the acceptance criteria of stories, they firm up feature requirements. They are quick to create once the variables are identified and mean you are less likely to miss a key component or variable.
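As a minimal sketch of how a decision table translates into tests, consider a hypothetical discount feature with three conditions. The `discount` function, the condition names and the percentages are assumptions made up for illustration; the point is that the table enumerates every permutation, so a combination the acceptance criteria forgot to cover becomes obvious.

```python
# Minimal sketch: a decision table for a hypothetical discount feature
# with three conditions. Each row is one rule; together the rows cover
# all eight permutations of the three conditions.
import pytest


def discount(is_member: bool, order_over_100: bool, has_promo: bool) -> int:
    """Hypothetical rules: members get 10%, orders over 100 add 5%, a promo code adds 5%."""
    total = 0
    if is_member:
        total += 10
    if order_over_100:
        total += 5
    if has_promo:
        total += 5
    return total


# Decision table: conditions on the left, expected outcome on the right.
DECISION_TABLE = [
    # (is_member, order_over_100, has_promo) -> expected discount %
    ((False, False, False), 0),
    ((False, False, True),  5),
    ((False, True,  False), 5),
    ((False, True,  True),  10),
    ((True,  False, False), 10),
    ((True,  False, True),  15),
    ((True,  True,  False), 15),
    ((True,  True,  True),  20),
]


@pytest.mark.parametrize("conditions, expected", DECISION_TABLE)
def test_discount_rules(conditions, expected):
    assert discount(*conditions) == expected
```

Eight rows cover every combination of the three conditions; any combination the story’s acceptance criteria doesn’t mention shows up as a row with no defined outcome, which is exactly the conversation you want to have before test execution starts.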

The new world of testing we live in seems a little too keen to link exploration with no documentation or planning. But exploratory testing should always have a level of planning. There should be a scope. There should be objectives. There should be techniques. However small the timebox.

I find large testing cycles very avoidable when the testing techniques mentioned above are used on top of automation. These techniques help reduce the cost of testing by reducing the number of test cases and, more importantly, by not running redundant tests that will provide the same result as previously run tests. For me, a duplicate test is a failed test, or a resource that could have been better used elsewhere.

So what of the next decade of testing? The main drivers going forward will be exploratory testing and automation, and with the much wider availability and ease of use of automation tooling, that will be the main focus.

However, as detailed above, without a plan built on an understanding of the test basis and scope using the techniques above, you will effectively be making the same mistakes, just in an automated test rather than a manual one.

With this in mind, the next decade needs to be all about balancing the new with the tried and tested.
