Why is it important to classify and prioritise your bug list?
This seems to be a very obvious topic, but in the heat of battle it sometimes gets blurry. So, let's analyse the issue by answering the three questions below:
- When do you really need any classification/prioritisation system?
- Why should you bother with it?
- How do you structure such a process?
So, when do you need to worry about it?
That's an easy one. If you are lucky enough to have a system with only a few, easy bugs, you are probably better off not worrying about it. But…
You don't need a "rocket science" system to accumulate enough bugs to worry about. In my experience, even a "modest" system can produce enough defects to fill your nights with nightmares. So, in 99% of cases, you will need to figure out which bugs should be dealt with first.
Why should you put some classification/prioritisation system in place?
There are many reasons why such a system is needed. Among them, it helps the team in two main ways: it keeps the focus on the right bugs first, and it keeps the bug-fixing process organised. Without it, you may end up fixing the wrong issues first and, even worse, you may start to rely on the "sound judgment" of whichever developer happens to be in charge at the moment of the fix. (Believe me, most of the time it's not that good.)
Another good reason is that such a system helps balance bug fixing against new feature development. You can benefit a lot just by putting some basic policy in place, such as "in case of a high-priority bug, the team should stop all work to remove the impediment" or "non-priority bugs should be dealt with right after the new feature development cycle, so you don't stop mid-work".
How can you put such a system in place?
In general, you can classify product bugs by their impact, priority and/or severity (or any combination of these).
Some companies use a simple 1-to-5 ranking (from very low to very high). The point here is to make the criterion explicit: why is one bug considered very high while another is considered very low?
The rule should be simple, easy and tangible for everyone. An explanation of each priority level works well. Something like:
- Very High — System completely unavailable, or experiencing severe service degradation
- High — Main components unavailable with no known workaround, or key features experiencing degradation
- Medium — Main components experiencing some service degradation with a well-known workaround, or secondary components experiencing some service degradation without a known workaround
- Low — Secondary (or satellite) components experiencing low service degradation, and bugs that do not affect how the user interacts with the system
- Very Low — Bugs that have no or very little impact on the user experience
With such a list, it's much easier for everyone to understand that:
- If the portal is unavailable, it is clearly a Very High impact issue
- If you have an e-commerce platform and somehow your users are not able to complete purchases or payments, it's also a Very High impact issue, but
- If the billing system is not able to process instalments, maybe it can be "downgraded" to a high or even medium impact issue.
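To make the rule tangible in a tracker or a triage script, the rubric above can be encoded as an explicit function. This is a minimal sketch of my own: the level names follow the list above, but the boolean inputs and the exact branching are simplifying assumptions (the "Very Low" level is omitted for brevity).

```python
# Hypothetical sketch: turning the impact rubric above into an explicit rule,
# so the criterion is written down rather than left to individual judgment.
def impact_level(system_down: bool, main_component: bool,
                 has_workaround: bool) -> str:
    if system_down:
        return "Very High"  # the whole system is unavailable
    if main_component:
        # a main component is degraded: High without a workaround, Medium with one
        return "Medium" if has_workaround else "High"
    # a secondary component: Medium without a workaround, Low otherwise
    return "Medium" if not has_workaround else "Low"

# "Portal unavailable" from the example above:
print(impact_level(system_down=True, main_component=True, has_workaround=False))
```

The payoff is that a triage discussion becomes a discussion about the inputs ("is billing a main component?") instead of a debate about the label itself.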
But is using a single dimension enough?
If it isn't, you can use another dimension to help you out. Say you are running a multi-tenant application; in that scenario, it might be a good idea to understand how those bugs affect your customer base, with a criterion such as reach:
- Very High Priority: It affects all users.
- High Priority: It affects your key users.
- Medium Priority: It affects some users.
- Low Priority: It affects just a few users.
- Very Low Priority: It affects only one low-priority user.
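A reach criterion like this is easy to automate if your tracker knows how many users a bug affects. A minimal sketch, where the 25% threshold and the key-user flag are assumptions of mine, not part of the scale above:

```python
# Hypothetical sketch: mapping "reach" onto a 1-5 priority scale.
# The 25% cut-off and the key-user flag are illustrative assumptions.
def reach_priority(affected: int, total: int, affects_key_users: bool) -> int:
    if affected >= total:
        return 5              # Very High: affects all users
    if affects_key_users:
        return 4              # High: affects your key users
    if affected / total >= 0.25:
        return 3              # Medium: affects some users
    if affected > 1:
        return 2              # Low: affects just a few users
    return 1                  # Very Low: a single (low-priority) user

print(reach_priority(affected=1000, total=1000, affects_key_users=False))  # 5
```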
Besides this dimension, you can think of several other aspects. You can use:
- Functional vs Design vs Localisation Issues.
- Functional vs Architectural vs Performance Issues.
- Technical Complexity related aspects.
- Modules or teams responsible (this is especially useful when your team starts to grow).
- Bugs found by customers vs bugs found by QA or internal teams.
Putting it all together
So, how can we mix everything discussed so far? My preferred approach is to keep it simple. If you have multiple teams, the "team" dimension is pretty useful for getting the right team on the right issue. Besides this, both bug priority and impact may also be very useful. Additionally, you can swap the "priority" dimension for "probability" if you want a "risk-like" assessment. In that scenario, we end up with something as simple as the matrix below:
Using this matrix as a tool, we should definitely worry about all the "red" areas and keep a close eye on the orange ones.
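In code, such a matrix boils down to combining the two 1-to-5 scales into a zone. The score cut-offs below are illustrative assumptions of mine, not a standard:

```python
# Hypothetical sketch of an impact x probability matrix: both axes use
# 1-5 scales; the zone cut-offs are assumptions for illustration.
def risk_zone(impact: int, probability: int) -> str:
    score = impact * probability      # ranges from 1 to 25
    if score >= 15:
        return "red"                  # drop everything and fix it
    if score >= 8:
        return "orange"               # keep a close eye on it
    return "green"                    # schedule it for later

# A Very High impact bug that hits everyone sits squarely in the red zone.
print(risk_zone(impact=5, probability=5))  # red
```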
And what about the green spots?
Well, if you have read this far, the approach probably makes sense to you. So the next question is: when should you fix all the non-critical (green) bugs? There are some alternatives here:
- You can set one developer apart to fix only these bugs.
- You can plan an ongoing activity to fix non-critical bugs along the sprints (or development cycles), e.g. three non-critical bugs per sprint.
- You can set apart specific sprints (or any other kind of time-box) for this. I like to call these time-boxes "housekeeping sprints".
- You can set a policy of fixing one bug before starting each new story.
- Or you can mix all the above.
I have used all of them, and they work pretty well, especially for legacy bugs. For "new" bugs, ideally you should fix them as soon as possible, since it's easier, faster and more cost-effective.
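The "N non-critical bugs per sprint" alternative above can be sketched as a simple selection over the backlog. Everything here is a hypothetical illustration: the tuple shape, the budget of three, and reusing impact times probability as the score are my assumptions.

```python
# Hypothetical sketch: pick the top "budget" non-critical bugs for a sprint,
# scoring each as impact * probability (both on assumed 1-5 scales).
def pick_for_sprint(bugs, budget=3):
    # bugs: list of (bug_id, impact, probability) tuples (assumed shape)
    non_critical = [b for b in bugs if b[1] * b[2] < 15]  # leave "red" bugs out
    non_critical.sort(key=lambda b: b[1] * b[2], reverse=True)
    return [b[0] for b in non_critical[:budget]]

backlog = [("BUG-1", 2, 2), ("BUG-2", 3, 3), ("BUG-3", 1, 1), ("BUG-4", 2, 4)]
print(pick_for_sprint(backlog))  # ['BUG-2', 'BUG-4', 'BUG-1']
```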
As I said before, there is a wide array of choices, and the most important thing is to understand your needs and context. Once you know where you are, you can choose what to do about it. The only real hint I have is the good old "Keep It Simple".