Software Testing: Methods, and Paradigms, and Pyramids, Oh My!

Seamus Kearney
Published in SEEK blog
Mar 5, 2021 · 4 min read

Why ‘software testing’ is as much a human problem as it is a technology one.

Unless something changes that makes A.I. capable of providing 100% code coverage on your behalf, you’re going to need to know how to approach testing critically. The topic of smoke tests came up in my team’s Slack recently. Sixty-eight comments later, we decided to move to Zoom and finally scheduled a discussion during our tech huddle (a 60-minute informal session for sharing ideas and thoughts). As you can imagine, the topic of testing tends to elicit many opinions. Forming your own is a good mentality to have, so take mine with a pinch of salt, to taste.

Ambiguities of ‘Systems’

What constitutes a ‘system’ is inherently ambiguous. Because of this, communicating testing methods via conventional terminology has proven less clarifying than I expected. During our discussions it became apparent that the shift towards micro-services has clouded our shared understanding of these terms. Some think of smoke testing as UI-driven processes that are prone to failure; others apply a diet-integration model, invoking dependencies only far enough to validate that they can be invoked. That said, we all agree that we should endeavor to write tests that incur the lowest cost and provide the fastest feedback.
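To make that diet-integration idea concrete, here is a minimal sketch using Node’s built-in test runner and fetch. The profile API, its URL, and the environment variable are hypothetical stand-ins rather than anything my team actually runs; the point is simply that the test invokes the dependency only far enough to prove the call can be made.

```typescript
// smoke.test.ts: a "diet-integration" smoke test sketch (endpoints are hypothetical).
// Run with Node's built-in test runner via a TypeScript-aware loader such as tsx (Node 18+).
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical downstream dependency of the service under test.
const PROFILE_API_URL = process.env.PROFILE_API_URL ?? "https://profile-api.example.com";

test("profile API dependency can be invoked", async () => {
  // Invoke the dependency only far enough to prove the call can be made:
  // no business assertions, just "it answered and it wasn't an error".
  const res = await fetch(`${PROFILE_API_URL}/health`, {
    signal: AbortSignal.timeout(5_000), // fail fast; smoke feedback should be quick
  });
  assert.ok(res.ok, `expected a 2xx from /health, got ${res.status}`);
});
```

Notice there are no assertions about payload shape or business rules; the moment you start asserting on those, you have left smoke-test territory and the cost profile changes.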

after code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software — Microsoft 2019

We’re taught that tests increase in cost, complexity, and maintenance burden as you move up the testing pyramid. There are many ways to test your code, so you should always evaluate how a test provides value, and at what cost. For example, I’ve found that ‘smoke tests’ are often frowned upon or dismissed as expensive (cost being as much a maintenance concern as a performance one). Yet when used on critical APIs and ingress points, they provide a necessary layer of confidence in our systems.

A useful testing pyramid by DevOpsGroup | Creative Commons Attribution-NoDerivatives 4.0 International License
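As a rough illustration of the ‘critical APIs and ingress points’ case above, here is another hedged sketch. The base URL and routes are made up; you would substitute whichever endpoints your team agrees are critical.

```typescript
// ingress-smoke.test.ts: post-deploy smoke checks over critical ingress points.
// The base URL and routes below are illustrative, not real endpoints.
import { test } from "node:test";
import assert from "node:assert/strict";

const BASE_URL = process.env.SERVICE_BASE_URL ?? "https://jobs-api.example.com";

// The handful of routes we can't afford to have silently broken after a deploy.
const criticalRoutes = ["/health", "/api/v1/search?q=engineer", "/api/v1/categories"];

for (const route of criticalRoutes) {
  test(`ingress point ${route} responds after deploy`, async () => {
    const res = await fetch(`${BASE_URL}${route}`, { signal: AbortSignal.timeout(5_000) });
    // A cheap, shallow assertion: the route is up and not erroring.
    assert.ok(res.ok, `expected a 2xx from ${route}, got ${res.status}`);
  });
}
```

A handful of shallow checks like these are cheap to run after every deploy and cheap to maintain, which is the balance the pyramid diagrams are really arguing about.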

Until recently, I had been roadblocked by my own flawed understanding: that testing has clearly defined boundaries which guide us on which technique to use and how best to apply it. However, when I asked for advice on smoke testing, each person gave me a wildly different answer or showed visible disdain for them. This stands in stark contrast to a statement from Microsoft claiming that, “after code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software”. It’s worth mentioning that Azure categorises smoke tests as lower cost than integration tests, despite many pyramid diagrams contradicting that sentiment.

So what has this shown me? That even for the seasoned among us, there is ambiguity in what we test and how we test it. As a general rule, we should approach the majority of our testing, like most software problems, as a cost-to-value proposition. We should strive for a healthy balance between the costs we can and can’t live with, while deepening our understanding of the ‘system’ to improve that balance.

Defining Boundaries

If you agree with the sentiment of the previous section, then the best advice I can give you is to have your own conversations with your team: work to define the boundaries of your systems and what level and style of testing fits each. Once you have this shared understanding, the usual questions (“what are we trying to protect against?”, “what are we trying to validate?”) become much easier to translate into valuable test suites that are, hopefully, consistent across the services you manage.

Use agreed boundaries to guide implementation; endeavor to test only what you need in order to get the desired validation, and don’t invoke dependencies other than to confirm that you can. The state of your domain can change rapidly, and ownership of services with it. (See the sketch after these points.)

Provide meaningful (see: valuable) validation; feedback from tests should be timely and informative, and disseminating it should cause as little friction as possible. Think about how your implementation may block the next person from making changes that your test shouldn’t care about. Be critical of your tests and their value becomes apparent.

Get consent, not consensus; not everyone in your team has to be in agreement, but they should be able to accept the compromise you’re offering with your testing patterns, given the boundaries identified earlier. This is where a prior, team-wide conversation can provide the most value; team cohesion and consistency are fantastic attributes to have.
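To illustrate the ‘test only what you need’ point, here is one last hypothetical sketch. The ProfileClient interface, the rankJobs function, and the stub are all invented for this example; the dependency’s reachability is already covered by a smoke test, so this test replaces it at the agreed boundary and validates only the logic the team owns.

```typescript
// search-ranking.test.ts: a sketch of testing only within an agreed boundary.
// ProfileClient and rankJobs are hypothetical stand-ins for this example.
import { test } from "node:test";
import assert from "node:assert/strict";

interface ProfileClient {
  getPreferredLocations(userId: string): Promise<string[]>;
}

// The behaviour we own: rank jobs in the user's preferred locations first.
async function rankJobs(
  jobs: { id: string; location: string }[],
  userId: string,
  profiles: ProfileClient,
): Promise<string[]> {
  const preferred = new Set(await profiles.getPreferredLocations(userId));
  return [...jobs]
    .sort((a, b) => Number(preferred.has(b.location)) - Number(preferred.has(a.location)))
    .map((job) => job.id);
}

test("jobs in preferred locations rank first", async () => {
  // Stub the dependency at the boundary: its reachability is covered by the
  // smoke test, so here we validate only the ranking logic we own.
  const stubProfiles: ProfileClient = {
    getPreferredLocations: async () => ["Melbourne"],
  };
  const jobs = [
    { id: "a", location: "Sydney" },
    { id: "b", location: "Melbourne" },
  ];
  assert.deepEqual(await rankJobs(jobs, "user-1", stubProfiles), ["b", "a"]);
});
```

Because the stub lives at the boundary the team agreed on, the next person can change how the real client works without this test getting in their way.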

Challenging the Status Quo

Systems have evolved, and so should our testing methods. I’d recommend questioning how you test and getting your team’s opinion, then using that as the basis to construct your own tests. Team conversations are great, but you’ll often get much more varied input in a one-to-one setting.

Perhaps it’s time to evolve our understanding of testing paradigms and redefine the nomenclature… are our ways of testing keeping up with the models of our applications? This is something I hope to dig deeper into in my own career as I continue to explore more diverse technology stacks and the teams that build them.

So, I will leave you with something to ponder…

If you had never seen the ‘testing pyramid’, how would you describe the layers of testing used for micro-services today?


Seamus Kearney

Software Engineer @ SEEK.com.au. He/They. Eyebrow Whisperer. #GameDev dreamer.