Five-Fold Testing System — #2: Coverage

Coverage — what is it?

Yusuf Misdaq
DealerOn Dev
8 min read · Nov 9, 2018


Our DealerOn Dev-QA team recently expanded (great!), so while the experience of recruitment is still fresh in my mind, I’ll come at ‘coverage’ from a slightly different angle. I’ve always found it interesting that on most software testers’ CVs, it still seems in vogue to make an exhaustive list of the types of testing the candidate is proficient in. They will typically go something like this:

“I can do black box testing, functional testing, smoke testing, regression testing, white box testing, stress testing, UX testing, etc. etc.”

I think this is all a bit redundant to put on a CV, simply because the names (which point to real and subtle processes) are best discussed in the context of in-depth examples of testing scenarios. When they are simply listed, out of context, the risk is that the label (and the name-dropping) supersedes the actual process being named (something all too common in software, unfortunately). The list does, however, remain useful for us when discussing testing coverage, because at a high level it simply tells us that there are numerous testing types, that is to say, different ways of approaching the testing of a product/application. If I am doing stress testing, for example, I am doing something essentially distinct from functional testing.

So when we talk about “coverage” in testing, it is really about how many of these areas have been taken into consideration by the testing team. In other words, thinking about test coverage is thinking about:

1. How many different testing angles can you look at a piece of software from?

2. How flexible (smart) is your approach regarding the appropriate use of these various testing angles?

As for the act of actually testing, let’s just say that as surely as you can cover a piece of toast with any number of spreads, jams (and an ever-expanding selection of nut butters), so can you also cover a web product with a large array of different testing styles.

A sea of random people…

How many angles?

If all of this sounds abstract, that’s because it (and testing) often is. That list of testing styles on the CV could really go on forever (almost like the nut butters…) because there are endless ways to think about and describe testing processes. As testing mastermind Michael Bolton said on his blog DevelopSense,

…testing is about exploration, discovery, investigation, and learning. Those things aren’t calculable except in the most general way, and work tends to expand or shrink to fit the time available for it.

It’s because of that vast reality that considerations of coverage imply (even demand) definition, sculpting, rules, limits, scope. Without them, we’re essentially a sea of random people, having an endless conversation of inaudible whispers and mumbles, in a pitch-black room.

When we place defined scope on coverage, we begin to turn the testing space from an infinite expanse into a reasonably explorable area. At a high level, this can be as simple as agreeing upon the entry and exit criteria of the testing (“testing should only begin once the user lands on the form page, and testing ends once they successfully receive that confirmation e-mail,” for example). Scope can also be as complex as a whole research phase of a Sprint, a study if you will: mind-mapping an individual product, creating a breakdown of its users, and surveying their potential expectations (expectations that may be based on (i) our past products and (ii) industry standards for similar products, weighed against (iii) where we might want to take/guide users in the future, given what we know is coming down the pipeline…).
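To make that concrete, here is a minimal sketch in Python of what a written-down scope might look like. The class and field names are mine, invented purely for illustration; this isn’t a peek at our actual tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCharter:
    """A written-down scope for one round of testing (illustrative only)."""
    feature: str
    entry_criteria: str   # testing begins only once this is true
    exit_criteria: str    # testing ends once this is true
    in_scope: List[str] = field(default_factory=list)
    out_of_scope: List[str] = field(default_factory=list)

# The form-page example from above, captured as a charter.
form_charter = TestCharter(
    feature="Contact form submission",
    entry_criteria="User has landed on the form page",
    exit_criteria="User successfully receives the confirmation e-mail",
    in_scope=["field validation", "submit button", "confirmation e-mail"],
    out_of_scope=["downstream processing of the lead after the e-mail"],
)
```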

Even the simplifications, then, can risk giving way to complexity, which, as far as we’re concerned, is one of the biggest drivers of buggy software. For this reason, our testing team is always on alert to avoid potential complexity at all stages of product planning and test planning. Ultimately, nothing dictates the scope of your coverage better than context, or as I also call it, common sense.

“It’s the commonest sense!” — Or is it?

Our way

At DealerOn, our clients don’t care what percentage of coverage we have around any particular product. They don’t care if, in our testing phase, we focused 50% of our efforts on UI coverage and 50% on code coverage (cough-cough, developer unit testing is code coverage!). What our clients do ultimately care about is what doesn’t work, what doesn’t look right, or what doesn’t feel right. This stark reality, coupled with the high volume of bug fixes and improvements that our amazing dev team churns out (alongside the fundamental point: the incredible complexity of our ever-evolving CMS product), means that we are forced to take a somewhat simplified and common-sense approach to a lot of the tickets that we look at. That is to say:
i) Happy path, ii) Edge cases, iii) Regression.

Happy path. Does it do what it says on the tin? There are a few different ways to do the same thing, so this is never as simple as it seems. Take the case of a simple input field as an example: some people want to type directly, others may be copying and pasting, while a select few might be trying to use hotkeys. Then there are configuration issues: sure, it works great on this device, but what about that device? (We’re blessed to do our cross-browser testing with both real devices in-house and BrowserStack.) Not to mention internal software configuration variables, i.e., it works for this type of user, but how about that type of user? And is there a third, or fourth, type of exceptional user that exists in some murky grey area where few dare to tread?
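As a rough sketch of how those angles multiply, here is what a happy path matrix might look like in Python with pytest. The `submit_contact_form` helper and the user types are hypothetical stand-ins, not our real test code:

```python
import pytest

# Hypothetical helper: drives the form in a browser and returns the result page.
# A real version would wrap Selenium, Playwright, or a similar driver.
from app_test_helpers import submit_contact_form

USER_TYPES = ["anonymous_visitor", "logged_in_customer", "dealer_admin"]
INPUT_METHODS = ["typed", "pasted", "hotkeys"]

@pytest.mark.parametrize("user_type", USER_TYPES)
@pytest.mark.parametrize("input_method", INPUT_METHODS)
def test_contact_form_happy_path(user_type, input_method):
    """Happy path: every reasonable user type and input method should succeed."""
    result = submit_contact_form(
        user_type=user_type,
        input_method=input_method,
        email="test@example.com",
    )
    assert result.status == "success"
    assert "Thank You" in result.page_title
```

Nine tiny tests from one form, and we haven’t even touched devices yet.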

A great notion I once heard (I can’t remember the source, forgive me) was the idea of ‘follow-on testing’ (and there’s another new testing type for you!). The idea is: my happy path test passed, I did what I needed to do, but now what? Sometimes the answer is obvious: nothing, your test is over, go home. But other times, particularly when testing a new product (or if one has a bit more time), there may be interesting behaviors to observe (potentially even buggy ones) that show up after we’ve accomplished our main goal. An example would be a ‘Success’ or ‘Thank You’ page that crashes the browser when you try to hit the Back button (yes, that’s happened to me before!).
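Sticking with the same hypothetical helper, a follow-on test might look like this sketch (again, the names are illustrative, not our actual framework):

```python
from app_test_helpers import submit_contact_form

def test_back_button_after_confirmation():
    """Follow-on test: the journey doesn't end at the 'Thank You' page."""
    result = submit_contact_form(
        user_type="anonymous_visitor",
        input_method="typed",
        email="test@example.com",
    )
    assert result.status == "success"

    # Keep going past the happy path: hit Back and make sure nothing breaks.
    previous_page = result.browser.go_back()
    assert previous_page.loaded_without_errors
```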

Edge cases. Does it react gracefully to reasonable edge cases? Gracefully is key here. A person in real life is often judged to have acted “gracefully” when they face an unexpected barrage of difficult tests. With software it is the same. We really don’t have time for totally insane edge cases (let’s not get into what I mean by ‘insane’), but a simple example could be something like: what if we try to open a page that this user doesn’t actually have access to? A simple error message/pop-up informing us that we don’t have permission to access the page would be great, effective, and user-friendly; that test would pass. On the other hand, you could just as easily end up with something like this…

“Oh that?? That’s fine!! No biggie!”
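Here is a hedged sketch of that permissions edge case, still leaning on made-up helpers; a real version would check the rendered page, the status code, or both:

```python
from app_test_helpers import open_page_as

def test_restricted_page_fails_gracefully():
    """Edge case: a user without permission should see a friendly message,
    not a stack trace or a blank page."""
    result = open_page_as(user_type="anonymous_visitor", path="/admin/settings")

    assert result.status_code == 403                       # denied, not crashed
    assert "permission" in result.visible_text.lower()     # friendly explanation
    assert "exception" not in result.visible_text.lower()  # no raw error dump
```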

Regression. Has the change unduly affected how the page functioned before? We have a suite (ever-expanding, and a constant chore to maintain) of test cases that form our bedrock regression tests. When we change something on a specific page, we’ll run the standard regression tests for that page or that area. Let’s say, as an example, that our product is a bathroom; we’d have a regression suite of tests that ensure the basic functionality of all parts of that room. These would be tests that, say, validate the toilet functionality, the shower, the walkability of the room, the light switch, and so on. Now let’s suppose a new ticket comes in to put new carpet in the room. That could potentially affect every single other thing in the room, so we’ll probably make a decision to run all of the bathroom regression tests alongside our previous two sets of tests (happy path and edge cases), just to make sure this carpet doesn’t unduly mess with anything that’s already in place.
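In pytest terms (a sketch only, not our actual suite layout), the bathroom’s bedrock tests might be tagged so they can be run as a group:

```python
import pytest

from app_test_helpers import check_fixture  # hypothetical helper

# "regression" and "bathroom" would be registered as custom markers in
# pytest.ini so pytest doesn't warn about them; the names are illustrative.

@pytest.mark.regression
@pytest.mark.bathroom
def test_toilet_flushes():
    assert check_fixture("toilet").flushes

@pytest.mark.regression
@pytest.mark.bathroom
def test_light_switch_toggles():
    assert check_fixture("light_switch").toggles
```

When the new-carpet ticket lands, something like `pytest -m "regression and bathroom"` would pick up every bedrock test for that area, alongside the happy path and edge case tests written for the ticket itself.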

What specific regression tests we run for a given piece of functionality will often be based on i) the ticket in question, ii) a high-level appraisal of the associated Pull Request, and iii) a chat with the developer. Ideally only i) and ii) are necessary, but with the great relationship between DEV & QA, we can always walk a few steps over and chat if need be.

RISK & VISIBILITY

These are two more key heuristics that are helpful to consider when deciding on scope for an array of concerns (I often use them when deciding on testing coverage, the priority of tickets in our testing queue, or even whether or not something is allowed to bypass QA altogether, which is a request we may get once in a while for seemingly trivial tickets).

  1. Is there a high risk associated with it? Each industry and business has its own mega-important things that are considered high-risk. One of the foundations of the automotive IT industry, for example, is the ability to capture leads (in DealerOn’s case, our lead forms). So, if our forms are being touched, changed, or updated in any way, those are always going to be considered a high-risk category, and thus in need of additional attention.
  2. Is there high visibility on it? The thing being added or updated may be a non-functional thing like an image banner or a logo. It may not be considered as important to the business as something like the core functionality of the website. However, the fact that it is also the first thing a customer sees on the landing page makes it high-visibility, and therefore potentially important to test (see the sketch below).
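As a rough illustration of how those two heuristics can steer a queue, here is a toy scoring sketch; the weights and ticket names are invented for this example, not a real DealerOn process:

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    title: str
    risk: int        # 1 (low) to 3 (high): e.g. anything touching lead forms is a 3
    visibility: int  # 1 (low) to 3 (high): e.g. the landing-page banner is a 3

def priority(ticket: Ticket) -> int:
    """Crude heuristic: weight risk a little more heavily than visibility."""
    return ticket.risk * 2 + ticket.visibility

queue = [
    Ticket("Update footer copyright year", risk=1, visibility=1),
    Ticket("Swap landing-page hero banner", risk=1, visibility=3),
    Ticket("Change validation on lead form", risk=3, visibility=2),
]

# Test the riskiest, most visible work first; the footer tweak might even be a
# candidate to bypass QA after a quick sanity check.
for ticket in sorted(queue, key=priority, reverse=True):
    print(priority(ticket), ticket.title)
```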

SUM IT UP

Coverage is a bit like jam, peanut butter, almond butter, or Philadelphia cream cheese. Think before you slather. Don’t waste resources. Context is everything. Creating a coverage plan (literally a Word document or Excel workbook) for different recurring scenarios could be a great idea, but remember that without adequate knowledge of the product, this is going to be a hollow pursuit. Your decisions with regard to coverage mean next to nothing if you do not know the product well, so learn, learn, learn!
