The Evolution of Testing

I started my testing career eighteen years ago, mostly by accident.

I was a programmer doing tech support for a multimedia conference when I struck up a conversation with one of the presenters. Peter rang me up a couple of weeks later and said “We’re starting a testing centre, why don’t you come and help us set it up?”

“But I don’t know anything about testing,” I protested.

“That’s okay,” said Peter, “we’ll fly you to Seattle so you can learn.”

How could I refuse?

So after helping set up the hardware and network for what was to become the “Australian Multimedia Testing Centre” (now Access Testing in Sydney), I flew to Seattle, to a place called STLabs. They did testing for Microsoft and Adobe and a bunch of other software companies on the west coast of America.

And there, amongst many interesting characters, I met James Bach.

James was the Tester Emeritus at STLabs. I worked with him briefly and did a week-long course on testing — what he would now call ‘Context-Driven Testing’ — with him and 20 other people.

I owe James a debt of gratitude for starting me out on my testing journey.

Now I hasten to point out that I don’t agree with everything James says. In fact, as I sit here and look at the certificate from that testing course, I notice it has “Thanks for challenging the material!!!” scrawled on it. If I recall rightly, the point of contention came when I pointed out that “implicit specification” was an oxymoron and James asserted that he was using the words “not in the usual English sense — but in a special sense” — and that got me going.

In particular I’m not sure I agree with some of James’ personal attacks in pursuit of what he sees to be the truth… but then again I’m not known for my subtlety either and people in glass houses shouldn’t throw stones.

I once attended a round table at a testing conference in London which James opened by announcing “I love coming to England, it’s like going back twenty years in software development!” — a lively (and entertaining) debate followed.

But I do owe James a debt.

A debt, because even at that early stage of what has become “context-driven testing”, it opened my eyes to the concept that the best tool for a job is one that fits the problem. This is in stark contrast to so much of what is taught in testing, software development or even IT these days. Every consultant and champion of a particular ideology trumpets it as the best possible answer to everything under the sun. While ideological warfare is not unique to software development, we do seem to excel at ignoring evidence and riding whatever wave comes down the pipe.

Eventually my journey in software and testing extended into areas like Agile and particularly Lean. This gave even sharper context to the concept that there is no “right way” to do something, only slightly better or worse ways, and more often than not the distinction is unclear. Nowadays it causes me almost physical pain when someone refers to “Best Practice”, and I often have to restrain myself from physical violence when someone says “We tried that once, and it didn’t work.”

The point is not whether or not it worked but what you learnt from it.

You can learn more from failure than from success, and to be a professional is to continually learn, to upgrade your toolbox. If you simply stick to one idea and stamp it out — cookie-cutter style — throughout your professional career, then you are shortchanging yourself and your customers and setting yourself up for obsolescence.

Why do I mention this?

Because I see an awful lot of bad testing around.

I see people writing bad test cases, test plans and defect reports.
I see people wasting time and money and delivering software of such marginal value that it would have been better not to try in the first place.

I used to think the problem was the development methodology: waterfall versus incremental versus iterative versus rapid prototyping, and so on. And that testing was just the unfortunate customer of dysfunctional processes.

But now I think the testing profession must shoulder its portion of the blame.

Testing has lagged behind the evolution of software development, but now the chickens are coming home to roost. As the IT/software industry matures and solidifies, the pressure to deliver more efficiently mounts.

No longer is it sufficient to spend millions of dollars on the off chance of a breakthrough software product. Now software must continuously deliver value, at cost and scale, with ever-increasing efficiency. This trend will only continue. Look at mature industries, like car manufacturing, where margins have been continuously squeezed for decades.

The crunch is coming for software, and for testing in particular.

I recall one embarrassing conversation where I had to front the CIO of a major Australian corporation and explain why a one-field upgrade to his CRM was going to cost him a minimum of $100,000 in project costs. The answer was, of course, that we had to test everything, and since there were no fewer than five separate test phases (run by different departments) the costs did tend to mount up. While I defended the professional line — it wasn’t worth the risk to do it cheaper — it did rankle, and it nagged me for a long time. Why couldn’t we do it cheaper? Were there seriously no options other than a $100K price tag?

Well now we have options.

The convergence of agile, continuous delivery, DevOps, platform-as-a-service, cloud computing, serverless and a proliferation of very good and very free tools, as well as the standardisation of interfaces and protocols and the rise of open source have all contributed to an unprecedented level of flexibility in IT.

And the early adopters are reaping the rewards.

Because it’s now a big industry ($400B worldwide in 2013), the rump is slow to move. There are still people who are doing things the same way they did in 1990 and they survive. They will continue to survive for many years, but they won’t prosper.

We won’t have revolutionary change, but we now have the slow, grinding, relentless mill of efficiency and margin. The big will tinker away at the edges, the small will innovate and accelerate, and the middle will get squeezed from both sides.

This will apply to professions as well as organisations.

Those who don’t, can’t or won’t learn will be left behind.
The ability to adapt and adopt new tools will become a mandatory minimum for testers, and those who cling to purely procedural ways of testing will find themselves, like blacksmiths in the industrial revolution, obsolete.

Anybody sitting on the sidelines with only a hammer in their hands is going to have trouble finding enough nails.

Yes, I owe James Bach a debt for keeping my eyes open.

It could have been a quite different career.

Originally published at nickjenkins.net on February 18, 2017.

Nick Jenkins

A thinker, writer and consultant with a passion for things Lean & Agile.