Quality Radar: A new way to visualise Quality
Here at Cazoo Tech the mantras around Quality are:
- Whole Team Quality - or as W. Edwards Deming would say “Quality is everyone’s responsibility”
- Continuous Testing — the practice of including timely and relevant testing practices at all points in the software development life cycle (SDLC), described beautifully by Lisa Crispin as “Test early, test often, test in production”.
Embracing these mantras enables us to embed practices related to:
- Shifting Left (aka Baking in Quality) — to ensure risks are addressed early and that our experiments not only prove (or not) our product hypotheses, but also surface information about any “unknown unknowns” lurking in the weeds.
- Shifting Right (aka Testing in Production) — with the aim that we are aware of our software’s impact on our customers and our customers’ impact on our software at all times.
This sounds like the basis of any solid, modern Quality Strategy for which there are an array of beautiful models and blogs already available, such as:
- Lisa Crispin & Janet Gregory’s Agile Testing Quadrants (based on Brian Marick’s Agile Testing Matrix)
- Dan Ashby’s Information and Continuous Testing in DevOps models
- Mike Cohn’s Test Automation Pyramid (or, more usefully, the wonderful critique by John Ferguson Smart)
- Charity Majors & Liz Fong-Jones’ Observability Maturity Model
So why have I felt the need to invent something new? What gap have I found that needs to be addressed? The problem was that these existing models didn’t help me answer the core question:
What does good look like?
and the rapid follow-up questions:
How do you know if you’ve got it?
How would you recognise it if you saw it?
A fair set of questions to be faced with when hired to promote a “quality mindset” in a fast-paced and ambitious startup. Fair, but not easy to answer, especially in a manner that is both quick to absorb and easy to comprehend.
To be honest, I struggled to answer these questions for a while. That is, until I read a Twitter thread by GeePaw Hill at the end of 2020. In particular I was struck by this specific message in the thread:
Start by noting that we’re talking about *internal* software quality (ISQ), not external software quality. What’s the difference? ISQ is the attributes of the program that we can only see when we have the code in front of us. ESQ is those attributes a user of the program can see.
This is where inspiration struck!
Thinking about the internal and external nature of quality helped me answer “what does good look like” by drawing a picture that focussed on the differences between assessing internal and external quality. That picture was just a simple sketch of two circles, one inside the other, with quality habits (practices, tools or techniques) written onto sticky notes and placed on the relevant circles — for instance TDD for ISQ and Performance for ESQ.
Whilst adding tools and techniques to the diagram, I realised there were some that simply didn’t fit as internal or external software quality habits, for example using Customer Survey feedback as an indicator of where we can make improvements to our systems. To solve this I added a third circle around the first two and filled it with quality habits that related to a “brand” level of software quality, for instance Trustpilot rating or company OKRs.
As the rings filled up with quality habit stickies I started grouping similar topics together (e.g. alerting with tracing, and refinement with retrospectives) and in doing so I realised that there were four broad groups of topics. This led to the segmentation of the circles into four distinct quadrants covering Scripted Testing, Observability, Collaboration and Exploratory activities.
At this point the visual started looking more like an archery target or a dart board, and the name “Quality Radar” was born.
I’ll create a series of posts to share more about the radar including how we are using it to drive quality improvements in product teams. Spoiler alert — it’s not a cookie cutter template that teams must match as each team has a different context.
Catch you next time.