You probably need fewer testers than you think

ben kelly
Published in Testjutsu
3 min read · Sep 20, 2018

I have noticed that companies tend to conflate software quality problems with software testing problems. This sometimes leads to the knee-jerk conclusion that 'we need more testers' or 'we need better automation'. The net result is hiring more software testers than you actually need. Software quality suffers for all sorts of reasons. Frequently it's because two or more important, decision-making areas of the business aren't communicating effectively, or are incentivised differently.

Here are a couple of classic examples:

You have a sales team (or even one individual salesperson on the team) who makes promises to customers about products that don't exist yet, without doing any technical due diligence. Promises are made, contracts are signed, and the salesperson gets their bonus for hitting their targets. Meanwhile, the development and infrastructure teams are left to deliver on said promises, often at breakneck speed and under pressure to have this vapourware work perfectly upon release. This can't help but result in software quality issues, often ones that last well beyond the life of the project itself. This is a software quality problem. It is not a software testing problem.

Here’s another. Your platform team is incentivised to maintain stability. Their bonus is tied to system uptime, and they’re pushing hard for that fourth or fifth nine. Your development teams are incentivised to innovate. What happens when a team wants to push out some experimental code? You can use canaries, A/B tests, gradual ramp-ups and so on, but the moment you start impacting the platform team’s stats, you have trouble. Of course, the dev team has a job to do, so they do what any creative, free-thinking, highly motivated team does: they create a workaround. That might mean going off-piste with their own cluster of stuff somewhere, or forking the service they’re not allowed to mess with. The potential for that to have unintended consequences that impact software quality is significant. So again you have a software quality problem, but not a software testing problem.

Often these issues manifest as customer complaints, bugs in production, build issues, pipeline problems, intra- and/or inter-team dysfunction and so on. Rather than jumping to ‘hire all the testers!’, first take a step back and look holistically at how software development happens, taking into account platform and infrastructure, the incoming work pipeline, sales, technical due diligence and how various groups are incentivised.

Even once a company has a solid handle on these issues and recognises the need for improved testing, that still doesn’t mean hiring hordes of testers. Programmers can and should be testing the code they write. It is incumbent upon them to make sure their code is as bulletproof as they can manage. This Twitter thread articulates the why of that very neatly.

Companies that believe they need to hire more testers, or at least to improve their testing, might do well to bring on one or two experienced testers who can work with programmers and platform people to build out competencies in experiment design and critical analysis, and to use automation and monitoring to augment their cognitive abilities (as opposed to treating it as a safety blanket). Once teams have taken ownership of those fundamental responsibilities, you can look at where else software testers can be a multiplier, and the sort of skills and experience you need to hire for.

