Additive Testing: An Approach to Software Quality

Jay Newlin
Hunter Strategy
Jul 18, 2023 · 7 min read
Collaboration — particularly between developers and testers — is one of the most important tools to improving software quality. (Photo by Maxim Tolchinsky on Unsplash)

In my last blog post, I discussed my favorite definition of “quality.” In it, I referred to Quality Assurance testers and some types of tests. This time around, I would like to think together with you about how the various types of tests can work together to help ensure high-quality software from your team. I usually refer to this concept as “additive testing.”

In most software development teams or departments, we make some assumptions about who is responsible for “testing.” We generally assume that testing is done by the quality assurance (QA) folks. Period. I think that’s a particularly narrow view of testing.

Let me quickly dispel one notion: While I know what folks intend with the statement that “quality is everyone’s responsibility,” I think that it’s not particularly effective unless we all agree on how to achieve quality in software. Decades ago (in the 1980s and 1990s), Ford Motor Company used the advertising slogan “Quality is Job 1.” It sounded great. I suspect people bought cars, trucks, and vans from Ford because they believed that they were high-quality vehicles. But — both then and now — I wonder(ed) what that really means. I’m not a fan of slogans unless we also talk about how we achieve what the slogan claims to be true.

The Software Testing Pyramid

Many of us have heard of Mike Cohn’s “test pyramid” which he described in his 2009 book, Succeeding with Agile. In case the pyramid is new to you, here’s a version that I borrowed from the “What is Agile Testing” article on Vskills.in.

The Test Pyramid: a wide base of Unit Tests, a middle layer of Service Tests, and a small pinnacle of UI Tests. Tests are more isolated and faster at the bottom, more integrated and slower at the top.

To summarize: The pyramid shows us that we need to ground our software testing in a lot of unit tests (which run quickly but are isolated from the rest of the system). The next layer up is “service tests” (which check how components work with each other; they aren’t as fast, but they still run in minutes rather than hours). At the very top are UI tests (which run very slowly and expect the “whole” system to be available to the testing framework). My concept of “additive testing” is that, by creating good unit and service-layer tests, we need fewer UI-layer tests. This “layered” approach to testing increases our confidence in each aspect of the system as we move up the pyramid.

Jay’s Principles of Additive Testing

First: Let’s cooperate on robust unit tests

In most Agile shops, we all agree with Martin Fowler and the other pioneers of the Agile movement: Unit tests are a must — especially when you want to refactor your code. I encourage software developers to create unit tests alongside their code, ideally writing the tests before the code itself (the practice known as test-driven development, or TDD).

What I also encourage software development teams to do is to work with their QA counterparts to ensure that their unit tests are as robust as possible. I like to see developers and testers pairing — or at least to see testers reviewing unit tests during Code Review — so that the testers can make suggestions about improving or increasing the unit test coverage.

Why work together like this? Let’s take the example of an email address validator (that is, a method that is checking whether a user has input a valid email address). Good developers will check to make sure that there are some characters before an @ sign, some after, a dot, and two or more characters after the dot. Testers will want to try weird characters (from non-Latin alphabets or emoji) and will probably try to put numbers in the top-level domain (TLD — the stuff that comes after the dot). Really ambitious testers will try to put a zillion (or zero) characters in the various sections of the address. If the tester and the developer work together, the validator method’s unit tests can check these quickly and easily — and the tester won’t have to manually check dozens of iterations of invalid email address formats to ensure that the validator is working correctly. Good, fast unit tests — which the tester has reviewed and knows what they include — save a lot of testing time later!
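To make this concrete, here is a minimal Python sketch of what such a collaboratively hardened unit test suite might look like. The validator, its regex, and all the names here are hypothetical illustrations — a real validator should lean on a vetted library or the RFC 5322 rules rather than a simple pattern — but the edge cases mirror the ones a tester would suggest:

```python
import re

# Hypothetical validator for illustration only; production code should
# use a well-tested library rather than this simplified regex.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def is_valid_email(address: str) -> bool:
    # Cap the total length so a "zillion-character" address is rejected cheaply.
    return len(address) <= 254 and EMAIL_RE.fullmatch(address) is not None

def test_email_validator():
    # Developer's happy-path cases
    assert is_valid_email("user@example.com")
    assert is_valid_email("first.last@sub.example.co")
    # Tester-suggested edge cases: empty sections, numeric TLD,
    # non-Latin characters, and extreme lengths
    assert not is_valid_email("@example.com")
    assert not is_valid_email("user@")
    assert not is_valid_email("user@example.123")
    assert not is_valid_email("😀@example.com")
    assert not is_valid_email("")
    assert not is_valid_email("a" * 300 + "@example.com")
```

A suite like this runs in milliseconds, so every invalid-format scenario the tester dreamed up is re-verified on every build instead of being re-checked by hand.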

Someone from the QA team should also be working with tech leadership to ensure that coverage statistics are measured — and always improving. No, I don’t believe in the mythical 100% line-of-code (LOC) coverage, or 100% on most other metrics. What I tend to look for is high LOC coverage (80% or better) overall, 95% or higher in “critical units” (the team needs to agree what this means and which units those are), and 90% or higher branch coverage. When the team is really working together well, we can set a quality gate that prevents code from being merged into the project if test coverage decreases.
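As one concrete way to enforce such a gate, a tool like Python’s coverage.py can fail the build when coverage drops below an agreed floor; a CI pipeline can then block the merge. The thresholds below are illustrative, not a recommendation:

```ini
# .coveragerc -- thresholds are examples; agree on yours as a team
[run]
branch = True            ; measure branch coverage, not just lines

[report]
fail_under = 80          ; fail the coverage report below this percentage
show_missing = True      ; list uncovered lines in the report
```

Most other coverage tools (JaCoCo, Istanbul, gcov wrappers, etc.) offer an equivalent threshold setting.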

Next: Let’s be honest about needing more service-level tests

The middle layer of the pyramid (service tests) is where I suspect we all need to do some work. Far too often, teams include developers who create excellent unit tests and QA folks who create a lot of automated tests at the UI layer and regularly test the application manually — and everyone agrees (even if they don’t say it out loud) that “UI testing is our integration testing.” We do ourselves a disservice if we leave the middle layer (services, APIs, etc.) untested or under-tested.

We all know the scenario: We run our very slow UI tests overnight, and a significant portion of them fail with a 500 error. We’ve lost a lot of testing time, then we dig in to find the cause, only to learn that we really don’t know exactly what happened, so we just restart the server or service and hope that it won’t happen again tonight. Our system might as well have sent the 418 “I’m a teapot” error.

Service layer testing (or true integration tests, in my opinion) is another area for good cooperation. Architects, tech leads, and automation test engineers should all work together to identify the tooling that will help test the interactions between their components, microservices, etc. They should identify a tool that fits the tech stack to check that messages on the APIs are being sent, received, and processed as designed. The front end should include tooling that allows testing more complex components, to ensure that they are behaving and rendering information correctly.
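As a sketch of the idea, here is a Python service-layer test that drives a hypothetical order-creation handler through its JSON interface. Everything here — the handler, the field names, the status codes — is invented for illustration; the point is that the contract of the message is checked directly, quickly, and without spinning up the UI:

```python
import json

# Hypothetical service handler for illustration: accepts a raw JSON
# request body and returns (status_code, response_body), as the HTTP
# layer in front of it would.
def handle_create_order(raw_body: str):
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400, {"error": "malformed JSON"}
    if "sku" not in body or "qty" not in body:
        return 422, {"error": "missing required field"}
    if not isinstance(body["qty"], int) or body["qty"] < 1:
        return 422, {"error": "qty must be a positive integer"}
    return 201, {"status": "accepted", "sku": body["sku"], "qty": body["qty"]}

def test_service_contract():
    # Happy path: a well-formed message is accepted.
    status, resp = handle_create_order('{"sku": "A-100", "qty": 2}')
    assert status == 201 and resp["status"] == "accepted"
    # Malformed and invalid messages fail fast with specific codes --
    # no overnight UI run, no mystery 500 to debug in the morning.
    assert handle_create_order("not json")[0] == 400
    assert handle_create_order('{"sku": "A-100"}')[0] == 422
    assert handle_create_order('{"sku": "A-100", "qty": 0}')[0] == 422
```

When a test like this fails, it points at the exact message and response that broke — precisely the diagnostic information the UI layer can’t give us.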

For those who are developing microservices, I highly recommend that you consider Contract Testing. Many test managers and companies (for example, Lewis Prescott at Cera Care) report reduced downtime and fewer service-layer or API errors after implementing contract testing.
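Real contract-testing tools such as Pact automate this workflow, but the core idea fits in a few lines. In this hand-rolled Python sketch (all names and the record shape are hypothetical), the consumer publishes the response shape it relies on, and the provider’s test suite verifies its actual responses against that shape:

```python
# The shape the consumer relies on: field name -> expected type.
# Published by the consumer team; verified in the provider's test suite.
CONSUMER_CONTRACT = {"id": int, "email": str, "active": bool}

def satisfies_contract(response: dict, contract: dict) -> bool:
    # Every contracted field must be present with the expected type;
    # extra fields in the response are allowed (consumers ignore them).
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# Provider-side check against a (hypothetical) serialized user record.
assert satisfies_contract(
    {"id": 42, "email": "a@example.com", "active": True, "plan": "pro"},
    CONSUMER_CONTRACT,
)
# Dropping or retyping a field breaks the contract before deployment,
# not in production.
assert not satisfies_contract(
    {"id": "42", "email": "a@example.com"}, CONSUMER_CONTRACT
)
```

The payoff is that an incompatible change to a service’s API fails the provider’s own build, rather than surfacing as a 500 in some other team’s overnight UI run.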

Last: Let’s try to rely less on UI-layer testing

Over the years, we have all become complacent: QA testers and our tools are really good at finding bugs, so many companies and development teams rely on QA to catch a lot of (or worse, all of) the bugs. The biggest problem with this approach is that testing at the UI layer is painfully slow — even with the best automation. And, as I mentioned above, the UI layer doesn’t “know” much about the system beneath it, so many of the errors that UI-layer testing uncovers require a lot of investigation and debugging because there isn’t enough information to point out exactly what caused the error.

I don’t want you to think that I’m hinting that we could abandon UI-layer testing completely. We absolutely must make sure that the user interface is working and looks as expected. Good QA teams use a combination of automation and manual testing to ensure that it is. I insist on both (manual and automated testing) because only a human can determine if the product looks and feels as a user would expect (again, refer to my last blog post for more thoughts about this).

In Agile teams, UI tests should not lag behind feature development by much more than one Sprint. Ideally, they should be created in parallel with feature development. If you’re not there, you already know that your testers are spending a good bit of time on manual testing to ensure that critical areas of the system are ready for release to production.

To sum it all up

We need our UI-layer test suite to be as small as possible. As I mentioned above, these tests are slow and quite often “behind” the development cycle by a day or more (realistically: a Sprint or more). The solution is easy to describe but time-consuming to implement: Build more testing at the lower layers (unit and service), and make sure your QA team knows exactly what it covers. That way, they can create and execute only the UI-layer tests (automated and manual) that are absolutely necessary, instead of covering every edge case imaginable simply because no one knows if or where something is being tested.

Good luck on your effort to continue improving the quality (and the testing — at all layers) of your software!

Contact Us

contact@hunterstrategy.net

Our Website


Director of Quality Assurance for Hunter Strategy (hunterstrategy.net). I think and post a lot about software quality, policies, processes, and management.