Five-Fold Testing System — #1: Testers

Yusuf Misdaq · Published in DealerOn Dev · 7 min read · Oct 26, 2018

In their seminal work on software testing, Kaner, Bach & Pettichord came up with the concept of a Five-Fold Testing System. It’s often tempting (both in writing about software and in the actual practice of software development) to be constantly innovating and re-creating ideas from scratch. Some onlookers might more bluntly categorize this as a needless re-invention of the wheel. Given my general agreement with the system they laid out (alongside unashamed admiration for Kaner, Bach et al.), the aim of this article is not to re-invent, but rather to analyze that system in light of the day-to-day challenges our QA team faces at DealerOn, hopefully shedding some light on our organization in the process.

To briefly summarize, the system involves the following five considerations (I prefer to think of them as considerations rather than typical ‘phases,’ though it should be noted that you need to be considering all five).

1. Testers
2. Coverage
3. Potential Problems
4. Activities
5. Evaluation

This article will focus on the first consideration...

1. Testers

The ‘who,’ as in, “who is testing?”

Tim E. drawing it out for the team

When it comes to who does the testing, any self-respecting agilista will immediately blurt out, “everyone!” — and yet how many places have I seen where, even though agile was officially being practiced, an overall culture of lethargy stopped teams from achieving those lofty goals? Ironically, one of the DealerOn dev teams I spend a lot of my time on does not actually run an agile process, and yet I think we totally live that ideal. We still operate in many ways with a startup mentality, and because of that, this ubiquitous mindset of testing is even more pronounced at DealerOn than at other, larger companies.

1. An Idea

Let’s take the example of a Product Owner coming up with an innovative idea. Perhaps inspiration is the very first lightning strike. While said product owner is out on her lunch break, she notices the way someone parks their car, and then, almost out of nowhere, a great idea jumps into her head, coming into focus in a few glorious moments. It seems exciting. Why? Because it seems original. Anything original has the potential to distinguish the product, which in turn has the potential to add to the product the thing good software is really all about: value.

Because of the mere existence of this potential, and because, practically speaking, anything of value would necessarily require work (i.e. time, resources), and perhaps also because anyone pitching an idea has pride in their work, the product owner will immediately begin a process of testing, right there and then. This, just like the initial idea, is completely invisible. It is so closely tied to the original inspiration for the idea that the boundaries between where one ends and the other begins may seem blurry. Yet it exists. She is instantly posing herself questions like, “has this been done before?” That question may be answerable by querying her own brain more deeply, or she may need to open up her laptop and do some research over lunch. All of this is testing.

2. Make the Ticket

Let’s say the idea works, and she writes up a JIRA ticket as a ‘Feature Request.’

3. Pre-Dev Review

That ticket (like all new tickets) comes first to me (lead QA engineer) or someone on the QA team, as part of our daily bug/ticket triage. This happens in the morning, pretty much before anything else. We frequently invite customer support members to be part of this meeting for training purposes, just to make sure the company remains well-balanced (and to maintain the amazingly friendly atmosphere/culture we have). When assessing the tickets in this meeting, I am not making any ‘will we, won’t we’ decisions on the ticket, but rather simply testing the expression of the idea. The criterion for passing it to the next phase is essentially, ‘is this a well-expressed ticket (that isn’t obviously invalid)?’

4. Dev-Review

If the idea is described accurately and I see no obvious holes or questions, I pass it on to the next meeting, Development Review, which happens three times a week and involves the Sr. Technical Lead, Dev Team Lead and VP of Ops. They essentially test the ticket’s merits against their knowledge of our platform: the feasibility of coding it, the potential for code that could be re-used, the priority and size of the ticket, and which developer might work on it; or they may insist that it just isn’t feasible (in which case it may be pushed back, backlogged, linked to another future project, or outright rejected).

Sunny hard at work coding a new UI

5. Ready For Work / Work In Progress

Once it goes into the next phase, the developers’ work queue (i.e. “Ready for Work”), our amazing Developers (working in pairs) will build it, run existing unit tests against it and write new ones, and (depending on their experience with the product) perform some pretty sound business logic checks and tests. All of this is documented in a ‘Dev’s test cases’ area of the JIRA ticket, which, as you can imagine, is a blessing for QA/testers: when we know what has been tested before, we can not only avoid repetition, but also get an idea of potential areas of concern we may not have been aware of.
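For readers who want something concrete, here is a rough sketch of the kind of developer-written unit test that might end up documented there. It’s written in Python for brevity; the feature, function name, and business rule are hypothetical, invented for illustration rather than pulled from our platform.

```python
# A hypothetical example of a developer-written unit test that might be
# linked in the "Dev's test cases" area of a ticket. The function and
# business rule below are invented for illustration only.
import pytest


def calculate_lease_payment(vehicle_price, residual_value, months):
    """Toy business rule: spread the depreciation evenly across the lease term."""
    if months <= 0:
        raise ValueError("Lease term must be at least one month")
    return (vehicle_price - residual_value) / months


def test_lease_payment_spreads_depreciation_evenly():
    # 30,000 price, 18,000 residual, 36 months -> roughly 333.33 per month
    assert calculate_lease_payment(30000, 18000, 36) == pytest.approx(333.33, rel=1e-3)


def test_lease_payment_rejects_zero_month_term():
    with pytest.raises(ValueError):
        calculate_lease_payment(30000, 18000, 0)
```

The specific rule doesn’t matter; the point is that once cases like these are recorded in the ticket, QA can avoid re-checking them and concentrate on what hasn’t been covered yet.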

6. Ready for Review

When the developer is happy with their work, they will pass it on to the next phase, “Ready for Review”. Despite the title, this phase is not the QA testing phase, but rather another round of testing, namely peer code review, where the ‘partner’ Developer undertakes a thorough and involved process of inspecting the code their colleague just wrote.

Some ‘blind-spot heuristics’ that also serve as casual reminders!

7. QA / Testing phase

If it passes this, that’s when it officially enters our QA phase: either I or one of the QA team will pick up the ticket (sometimes the team knows which tickets they’ll get in advance, if I have earmarked one for a particular tester for the purposes of either training or getting a ticket out faster; at other times it’s a buffet). We run a build of the code, deploy it to the QA environment, and begin our UI functional testing! [NOTE: there’ll be much more on the numerous techniques and styles the QA team employs in a later article in this series.]
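To give a flavor of what a UI functional check can look like when we automate one, here is a minimal Selenium sketch in Python. The QA URL, element IDs, and page content are placeholders I’ve invented for this example; they are not our actual environment or markup.

```python
# Minimal sketch of an automated UI functional check against a QA deployment.
# The URL, element locators, and expected behavior below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

QA_BASE_URL = "https://qa.example.com"  # placeholder for the QA environment

driver = webdriver.Chrome()
try:
    # Load the page that hosts the new feature and wait for it to render.
    driver.get(f"{QA_BASE_URL}/inventory")
    search_box = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "vehicle-search"))
    )

    # Exercise the behavior the ticket describes: search, then check the results.
    search_box.send_keys("Toyota")
    driver.find_element(By.ID, "search-submit").click()

    results = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".vehicle-card"))
    )
    assert len(results) > 0, "Expected at least one vehicle in the results"
finally:
    driver.quit()
```

In practice much of our UI testing at this stage is exploratory and done by hand; a scripted check like this is just one of the styles the later article in this series will dig into.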

8. Ready For Release / Staging / Confirmation

If, after all our UAT / UI & DB inspections, we feel confident that it’s good, then it will be queued up for deployment to production (“Ready for Release”). This is with the proviso that one of our partners doesn’t need to see it and run their own testing phase; if that is the case (let’s say Toyota or Ford want their QA teams to check our work before it goes live), then we’ll place it on a dedicated staging server for them and give them the time they need to feel confident and assured.

Just as I saw these tickets during the initial triage way back, so too do I, the product owners, and often the original ticket reporters (frequently members of the Customer Support team) take a final look/test once it is live. The ticket is not closed until the original reporter confirms it, and as you can see, that responsibility can fall on pretty much anyone in the company.

In such a highly condensed summary (where we have basically walked the happy path) there’s so much we have not even touched upon: the testing work that Creative Ops (think UX) and Designers do on their own work before sending it forward, and, with new products, the potential to have ‘test-actors’ step in as user testers, beta-testers, etc. All in all, that’s a minimum of 12 testing eyes on a product during its lifetime.

A Takeaway…

If you’re sufficiently impressed by all this, then there is a more emotive takeaway I’d like to offer at this point: the amount you test something is an honest indicator of how much you care about it. Since no worker is perfect (we all have off-days, so even when a worker tests their own work, they are sometimes going to miss something), it is important that companies commit to putting testing processes in place, taking them seriously, and abiding by them.

When doing this, a company is essentially saying, first to themselves, and also implicitly to their stakeholders and customers, “we really care about this.” By contrast, an emerging trend at a few of the huge software companies (I don’t need to mention any names) is to rush and deploy everything at breakneck speed and then rely on users to encounter (and report) bugs in production, to be fixed later on. I find this approach not only sloppy and offensive to my sensibilities as a user, but also somewhat disconnected from reality, borderline arrogant, and overall just a less mature, less excellence-oriented way to do things.

I liken meticulous testing processes and habits to the example of a man who is putting together an elaborately staged wedding proposal. Before that day, he runs over his plan numerous times, talks to all others involved in the plan, takes great care to keep the whole thing secret, perhaps even walks through it physically, both alone and with others, and generally thinks about all of the potential variables he can imagine possibly going wrong.

Why would he do this? Because he cares.

…Check back soon, when I will cover the remaining four considerations of the Five-Fold Testing System from DealerOn’s perspective.
