Thoughts on Software Quality Drivers
--
Let us make one thing clear from the beginning — software quality cannot be measured in the number of test cases, the maturity of test automation, or the state of code coverage. All of these are important for reaching the goal, but too often they become goals of their own.
A scenario that plays out too often: delivery teams are told that the quality they deliver is not good enough, so time needs to be set aside to change it. Plans are drawn up; KPIs created; estimates made; workshops held; technical spikes run, and so on… This lasts for a while and delivers some value, but it does not take long before someone decides that enough time has been spent and focus needs to shift back to functionality… But do not worry — soon enough someone else will ring an alarm bell about quality and the cycle will restart 😊
So why is it so difficult for teams to “maintain” quality? I went through, and believed in, many answers myself: the business never prioritizes non-functional requirements; the technology we use is not good enough; we do not have the right mindset; and so on. After plenty of research, discussions, and feedback, I am ready to make this assertion: we measure the wrong thing.
What if we stop thinking about “quality” and start thinking about “confidence level” instead?
How confident are we that, when we release software, everything will go well: that deployments will succeed and functionality will work as expected?
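To make “confidence” concrete, one executable form it can take is a post-deployment smoke test that the pipeline runs after every release. Below is a minimal sketch in Python; the service URL and the /health endpoint are hypothetical placeholders, not a reference to any particular stack.

```python
import sys
import urllib.request

# Hypothetical placeholders: substitute the real service URL and
# health endpoint of the system being deployed.
BASE_URL = "https://example.internal/myservice"


def smoke_test() -> bool:
    """Return True if the freshly deployed service answers its health check."""
    try:
        with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as response:
            return response.status == 200
    except OSError:
        # Covers connection errors, timeouts, and non-2xx HTTP errors.
        return False


if __name__ == "__main__":
    # A non-zero exit code lets the pipeline fail the release automatically.
    sys.exit(0 if smoke_test() else 1)
```

Wired into a pipeline, a failed deployment blocks itself instead of waiting for a human to notice.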
Many organizations ship software to production in long iterations — weeks or months. Even if they follow frameworks like Scrum, they still treat “go-live” as a stand-alone event preceded by dedicated testing phases… which allows development teams not to worry about quality daily. After all, there will be a testing period when it really matters.
I would argue that this happens for a simple reason — the confidence level is low, so acceptance-testing and bug-fixing phases need to be put in place. And that is not wrong — better to do that than to deliver faulty software.
Now, to break the vicious cycle, rather than asking the team to improve quality, consider giving them the following goal: ship code to production daily.
This suddenly shifts the need for a high confidence level right into the development process. Automation will, by nature, become priority number one. The value of each quality control will come under real scrutiny (do I need code coverage everywhere, or only on complex code?). The need to improve will be internal, not external.
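To illustrate that parenthetical question, here is a minimal sketch of a “coverage where it matters” gate. It uses the standard-library ast module to compute a rough cyclomatic-complexity proxy per file and flags only the complex ones for strict coverage enforcement; the threshold of 10 is an illustrative assumption, and the whole approach is a sketch, not a prescription.

```python
import ast
import sys
from pathlib import Path

# Node types counted as decision points; a rough stand-in for
# real cyclomatic-complexity tooling.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.ExceptHandler, ast.comprehension)
COMPLEXITY_THRESHOLD = 10  # hypothetical cutoff for "complex" code


def complexity(source: str) -> int:
    """Rough proxy: 1 + number of branching constructs in the file."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))


def files_needing_coverage(root: Path) -> list[Path]:
    """Return source files complex enough to deserve a strict coverage gate."""
    return [path for path in root.rglob("*.py")
            if complexity(path.read_text(encoding="utf-8")) > COMPLEXITY_THRESHOLD]


if __name__ == "__main__":
    # Usage: python coverage_gate.py <source-root>
    for path in files_needing_coverage(Path(sys.argv[1])):
        print(f"enforce coverage on: {path}")
```

A pipeline could feed the flagged list into whatever coverage tool the team already uses, keeping the strict gate where defects are most likely to hide.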
Don’t get me wrong — all the planning and technical decisions mentioned at the beginning will still need to happen. But this time they happen in order to deliver a clear goal rather than an abstract “better quality”. Once done, delivery teams should be confident to ship code at any time, and the business will enjoy extremely short cycles — forget about the hot-fix release… everything is now a hot-fix release 😊
To sum it up — the drive for quality needs to be anchored in a tangible goal. Confidence to ship code to production daily is one example. I am sure more can be found.
But what will most likely not work is setting goals around quality-report metrics, the number of bugs in released software (what code does not have bugs?), or the number of automated test cases… all of that simply drives different outcomes…