The changing face of delivery, and its quality implications …

Mike Talks
Published in TestSheepNZ
Sep 16, 2018 · 4 min read

If you watch any old film about a newspaper, there’s usually a scene where some new story or new detail about a story comes out, and someone screams “stop the press” …

It used to be a big deal. Stopping the printing press, abandoning what had been printed so far, and starting again had a huge financial impact.

Over the last few months, I’ve been quiet, as I’ve been putting the finishing touches on my science fiction novel Melody Harper’s Moon. I tried for about a year to sell it, but ended up deciding “I’m just going to publish it myself on Amazon”.

Available on Amazon

Last night I was browsing my copy of the book on my Kindle, and I noticed a minor mistake. There should have been a page break before an entry.

I sighed, sat at my computer, pulled open the source file, changed it, uploaded the revision to Amazon. With luck, none of my two dozen readers will have noticed.

I then noticed that since launch a few weeks ago, I’ve made ten changes for minor issues like that as they’ve been discovered.

I’ve likewise made a similar number of changes to my testing book How To Test.

It’s ironic, because that image of “stop the presses” now seems quaint but archaic in the modern digital world. The words don’t have the same impact.

You can get a paperback version of my book, which is printed on demand, so again it’s not as if there’s a huge batch of books affected by a change.

What intrigued me was the thought: “this is the publishing version of continuous delivery, isn’t it?” Twenty years ago, we had our own form of “stop the press”, issuing software on CD, and if we found significant issues we’d have to distribute patches to customers, possibly on other media.

If our software was sloppy, people just wouldn’t use it, and likely wouldn’t seek out a patch. So we tested thoroughly.

I still see some testers with this mindset, as if we’re still issuing and updating software using that model.

The other side of the coin is that with continuous delivery, if a problem is found after release, you can patch it quickly before most customers ever encounter it. Checking my metrics on Melody Harper’s Moon, my mistake was late in the book, and only 3 readers had passed that point.

The problem is that we live in a world with multiple software products that often do the same thing. If software delivers a disappointing experience, we don’t tend to hang around and give it a second chance; we tend to move on.

Modern testing, unsurprisingly, sits somewhere in between: the aim for any service is to provide a relatively reliable (though admittedly not exhaustively tested) experience to customers, while acknowledging that small items can be fixed as you go.

An example would be the go-live validation we’d do for a pre-paid card. Often the changes we’d make would be around the customer balance portal, covering how accounting and fees would work. On the night of a release, we’d rarely check that; we went back to basics, trying out a sequence of purchases and reloads on our platform. Our customers would forgive us if they couldn’t log in to check their balance … but if the cards stopped working while they were using them, that’s the core service of a prepaid card, and people would quite rightly be storming away from our services muttering “I can’t even use my own money”.
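To make that concrete, here’s a minimal sketch of what that release-night check might look like if it were automated. The CardPlatform client, its methods, and the amounts are all hypothetical (the real checks were manual runs against the live platform); the point is simply that the test covers the core purchase-and-reload journey rather than the balance portal.

```python
# Hypothetical release-night smoke test for a prepaid card platform.
# CardPlatform and its methods are illustrative stand-ins only.

class CardPlatform:
    """Stand-in client for the prepaid card service."""

    def __init__(self, card_number: str):
        self.card_number = card_number
        self.balance = 0.0

    def reload(self, amount: float) -> None:
        # In reality this would call the platform's reload API.
        self.balance += amount

    def purchase(self, amount: float) -> bool:
        # A purchase should only succeed if the card holds enough funds.
        if amount <= self.balance:
            self.balance -= amount
            return True
        return False


def test_core_purchase_and_reload_journey():
    """The journey customers actually depend on: load money, spend it."""
    card = CardPlatform(card_number="0000-TEST-CARD")

    card.reload(50.00)
    assert card.purchase(19.99), "purchase within balance should succeed"

    card.reload(20.00)
    assert card.purchase(45.00), "purchase after a reload should succeed"

    # Declining an over-balance purchase matters too: failing open would
    # be far worse than the balance portal being down overnight.
    assert not card.purchase(100.00), "over-balance purchase should be declined"
```

The balance-portal and fee-calculation checks are exactly the kind of thing that can wait until the morning; the purchase-and-reload journey can’t.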

I’ll often find myself in consulting positions where a product owner will tell me that they don’t have the money, time, or resources for old-fashioned waterfall testing (ironically, they will often want the same documentation), that they won’t care if certain features don’t work, but also that quality is important.

But still, it’s a start. It can lead to a very fruitful conversation about where testing time should be spent to look for problem areas, especially in the features where failures can really sting. If testing time is spread incredibly thin, I need to call that out along with the risk involved, but that’s another subject, one I covered recently in my two-part story on being a sole tester here and here.
