Phased vs Threaded Testing

Aaron Hodder
Mar 10, 2018 · 7 min read


There have been many attempts to model different approaches to software testing. You’ll be familiar with many of them: exploratory vs scripted; traditional vs agile; testing vs checking; standards-driven vs context-driven; and many more.

There’s another dichotomy I’d like to introduce, one that I’ve derived value from using:

Phased testing vs. threaded testing.

As the name implies, it describes the testing lifecycle as either a series of phases or as intertwining threads.

I’ve found this distinction useful in encouraging debate or conversation about testing practice in a way that is agnostic about SDLC and testing politics, and it can therefore encourage useful introspection without too much baggage.

Phased Testing

Testing has traditionally been modelled and organised as a series of stages that occur more or less sequentially, symptomatic of the waterfall and enterprise contexts in which this tradition was established. When we talk of “traditional” testing, this is usually what is meant, and it often looks like this:

1. Requirements-Gathering Phase

  • Read requirements documents
  • Collect useful artefacts
  • Establish acceptance criteria

2. Scripting Phase

  • Codify test ideas into artefacts such as ‘Test Cases’
  • Codify test ideas into artefacts such as automated test scripts
  • Codify test “coverage” into a traceability matrix linking your scripting phase to your requirements gathering phase

3. Execution Phase

  • Testers perform testing as described in their test cases
  • Machines execute automated test scripts

4. Exit / Reporting Phase

  • Test exit reports are written
  • Political arguments are had about the coverage and the information found

This sounds fair enough, and the IEEE 829 standard for software test documentation defines a whole document structure to support this model of testing. This phased approach to testing closely mirrors the phase-based waterfall model of software development, but it can also be used within agile models of software development.

For example, I’ve seen situations where “agile testing” has been defined as little more than writing automated acceptance tests for the acceptance criteria attached to a story; the only difference between this and “traditional” testing being that a machine is executing the test scripts, and not a human. Granted, in agile environments, a tester typically has the opportunity to add value above and beyond the actual ‘testing’ of the product, and the lifecycle may be hours or days, rather than weeks or months, but when it comes to the testing activities per se, you can describe this more accurately as a linear series of phases with little to no iteration. You may argue that’s not ‘really agile testing’, and I’d agree with you, so it’s useful to have a label to describe this pattern independently of the wider SDLC it occurs in.
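To make the pattern concrete, here is a minimal sketch of what this kind of phase-bound, confirmatory checking often looks like in practice. It uses Python with pytest; the story, its acceptance criteria, and the ‘login’ function standing in for the system under test are all invented for illustration.

```python
# Hypothetical story: "As a registered user, I can log in."
# Each acceptance criterion is codified into one automated check,
# which a machine (rather than a human) then executes.

def login(username: str, password: str) -> bool:
    """Stand-in for the real system under test (invented for this sketch)."""
    return username == "admin" and password == "s3cret"

def test_ac1_valid_credentials_are_accepted():
    # AC1: a registered user with the correct password can log in
    assert login("admin", "s3cret")

def test_ac2_wrong_password_is_rejected():
    # AC2: an incorrect password is refused
    assert not login("admin", "wrong")
```

Run under pytest, these checks pass or fail mechanically; nothing in them learns, models, or evaluates beyond what was scripted up front.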

Therefore, I would like to call this model of software testing the ‘phased’ approach, as it views testing as a series of phases that, for the most part, progress linearly from one to the next. Having to go back to a previous phase is seen as a regression: a failure to adequately plan, control, or execute.

Phased Testing Reframed

It strikes me that if we look at the actual intent or purpose of the activities that define these phases, we can reframe them as follows:

  • Requirements-Gathering Phase → Learning Phase

Learning Phase

Why do we gather and read requirements documents and specifications? To learn. This is the phase where we are learning about the project, the problem it is trying to solve, and what has been designed (so far). We collect and read any and all accumulated artefacts, and talk to relevant stakeholders to gain an understanding of what’s going on and how we are going to test. This phase is not a success simply because we have read documents and accumulated artefacts; it is successful if and only if we have developed sufficient understanding of the project and our contexts to begin productively engaging in testing work.

  • Scripting Phase → Modelling Phase

Modelling Phase

A model is, essentially, something that represents something else in some way. We use a model to help us to understand or manipulate the thing the model represents.

There are many kinds of models. User stories, architecture diagrams, even source code are all models of the actual software that will be compiled into the machine code the user ultimately interacts with.

Test cases are models. They are a model for how a tester or machine will interact with the software and what they will look out for. Therefore, I would like to reframe the “Scripting” phase as the more general “Modelling” phase, because the success of this phase is not that x number of test cases have been created, but that the tester has developed useful models to facilitate interaction with and evaluation of the product.

  • Execution Phase → Evaluating Phase

Evaluating Phase

Test cases are not testing. Testing is not an artefact; it is not something that can be ‘written’. Testing is an event; an activity; a performance. Testing is the evaluation of a product by learning about it through experimentation. Executing confirmatory test cases and scripts may be part of that experimentation and one outcome of our learning, but at some point a performance takes place.

The success of this phase is not simply that the pre-defined scripts have been fully executed; it is that the product has been interrogated and evaluated to a satisfactory degree.

  • Exit / Reporting Phase → Feedback Phase

Feedback Phase

This is the point where the reports are written and defects are deferred. Since this is the phase where we produce the ‘test exit reports’ and the pass/fail results, and where we receive feedback on those results, let’s call this the feedback phase. Too often, testing occurs in a black hole until this final phase, at which point communication is (reluctantly) engaged in.

With this in mind, let’s look at these phases again.

The apparent absurdity

When we reframe these phases as Learning, Modelling, Evaluating, and Feedback, the sequence starts to appear a little absurd. Constraining our learning to a phase, and only once we’re done learning moving on to codify that learning into test cases, is not a process that fosters continuous learning and innovation.

Test cases are opaque. Hard to understand. Hard to change.

This means that even if we do decide to iterate on our models after performing some hands-on testing, we can do so only with great difficulty. And because going ‘backwards’ through the phases is so expensive and painful, we invent structures to protect ourselves from change. Structures such as ‘entry criteria’ and ‘exit criteria’ and ‘stage gates’ and ‘sign-offs’.

The phased approach to testing ignores one simple fact: Testing is all about change.

We change one idea about how something works for another. We learn more about what people want when we build models and solicit feedback. We then change our models to incorporate that feedback. Designs change when we learn new things about the problem we want to solve, or the limitations of our proposed solution.

Let’s look at an alternative model of testing. One that acknowledges and embraces change.

Threaded Testing

As an alternative to viewing testing as a series of discrete phases that move only forward (or backward only with great pain and difficulty), we can instead consider each of these phases as threads that intertwine and influence one another throughout the entire testing lifecycle.

There is also an element of strategy which, like a cable jacket around wires, guides and directs how those activities occur.

Therefore, we can now talk about testing as being ‘phased’ or ‘threaded’ according to the degree to which the different threads influence one another, and how iterative the approach is.

In this model, change is embraced, and we let new information influence our behaviour. We let our models adapt and evolve, and we perform testing to learn, and to get feedback on what we’ve learnt. Of course, with a model of testing so malleable, we need structures and techniques that lend themselves to rapid adaptation, which is why you’ll often find me advocating for visual test management tools such as mindmapping software, and for approaches to test performance closer to the exploratory end of the spectrum.

Some key skills and activities I see belonging in each category:

Learning

  • Reading and analysing documentation
  • Talking to people; asking good questions
  • Touring the product
  • etc.

Modelling

Creating good models is a key activity in the threads of testing. There are loads of different kinds of models we can use (one worked example follows this list), such as:

  • State Transition models
  • Flow Models
  • Equivalence Classes
  • Combinatorial models
  • Product Coverage Outlines
  • Test procedures
  • Note taking
  • Charters
  • Factorising
  • etc.
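As one concrete illustration of modelling, a state-transition model can be captured as plain data and then mined for test ideas. Everything below, including the document workflow and its states and events, is invented for the example; it is a sketch of the technique, not a prescription.

```python
# A state-transition model of a hypothetical document workflow.
# The model is just data: (current state, event) -> next state.
TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"): "draft",
    ("published", "retract"): "draft",
}

states = {s for s, _ in TRANSITIONS} | set(TRANSITIONS.values())
events = {e for _, e in TRANSITIONS}

# The model immediately suggests two families of test ideas:
# every transition it names should work...
legal = set(TRANSITIONS)
# ...and every state/event pair it omits should be rejected.
illegal = {(s, e) for s in states for e in events} - legal

print(f"{len(legal)} legal transitions to exercise: {sorted(legal)}")
print(f"{len(illegal)} illegal pairs to probe: {sorted(illegal)}")
```

The point is not the code but the model: when the model changes, the test ideas change with it, which is exactly the kind of adaptability a threaded approach depends on.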

Evaluating

Feedback

All these activities happen throughout testing. A tester may move between these threads unconsciously, invisibly, and rapidly. Or they may spend a lot of time performing activities that belong to one thread only.

In the past, I’ve talked about ‘lean testing’, and I now consider threaded testing to be a key tactic in achieving a lean testing approach, almost to the point of the two being synonymous.

The intention in describing testing as either phased or threaded is to be agnostic when it comes to SDLC and testing politics (testing can be threaded or phased in both agile and waterfall environments), and to encourage useful discussion without too much of the baggage sometimes associated with other terms. I have used it when I’ve wanted to talk about changing testing paradigms without getting derailed or sidetracked by terms that can be politically loaded in the environments I sometimes operate in.


Aaron Hodder

Service Lead at Assurity Consulting with focus on Lean testing | Co-Founder @WeTestNZ | http://Inclusive-Collaboration.org contributor. Neurodiversity advocate