When do you find your most important bugs? My top 5 below

We’ve been enumerating, analyzing, and doing a retrospective of all the tools and practices our Mobile team uses, and in the process, bugs turned out to be one of the major artifacts of the project.

This enumeration included testing activities from various sources: the internal testing team, external testing teams, bug bashes, actual users (reviews), crash reports, acceptance testing, static analysis tools, unit testing, automation, etc.

The definition we use for a bug:

A bug is when a product doesn’t meet an expectation.

Now, I want to reflect on how good bugs are found. Not all bugs are the same, and many get closed as won’t fix or not a bug.

By good, I mean a bug that is obvious enough that the team can agree to fix it in this sprint or the next.

Below are my top 5.

1. Testing within the sprint

Finding bugs within the current sprint, as the devs are merging code, gets my #1 vote. This is when a tester can have the most impact and get bugs addressed. Even minor UX bugs are fixable at this point, because the dev is still churning out code as we speak. If we stretch the definition a bit, we can include code reviews and pull requests in this category, because I have seen great code reviewers point out bugs like null pointer exceptions, race conditions, and internationalization bugs.
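As a minimal sketch of the null-pointer category above (the class and method names here are invented for illustration, not from any real codebase): the unsafe version assumes its input is always present, which is exactly the kind of assumption a sharp reviewer flags in a pull request.

```java
// Hypothetical example of a null pointer bug a code reviewer might catch in a PR.
// "Greeter" and its methods are illustrative names only.
public class Greeter {

    // The version a reviewer would reject: throws NullPointerException
    // when the caller passes null (e.g. a profile field that was never set).
    static String greetUnsafe(String name) {
        return "Hello, " + name.trim() + "!";
    }

    // The reviewed version: handles the null/blank case explicitly.
    static String greet(String name) {
        if (name == null || name.trim().isEmpty()) {
            return "Hello, guest!";
        }
        return "Hello, " + name.trim() + "!";
    }

    public static void main(String[] args) {
        System.out.println(greet("Ada"));  // prints "Hello, Ada!"
        System.out.println(greet(null));   // prints "Hello, guest!"
    }
}
```

The fix is trivial once seen, which is the point: caught in review, it costs minutes; caught by an end user, it is a crash report.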

2. Bugs found accidentally while creating UI automation

This is a surprising category that gets my #2 vote, because more than once I have found bugs while writing automation. UI automation has to be very precise and executes scenarios end to end as a user would. Many times I have serendipitously found high-priority bugs while creating automation, e.g. a crash that only happens on a particular device (because that happens to be the device I am writing automation with), or a hang that happens only when the user does a persistent login (because only the automation flavor persists login).

I would categorize this time as a form of exploratory testing with a high return on investment. For this reason, I still advocate writing, editing, and maintaining test automation, even though the number of actual failures caught by automation is minimal.
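The persistent-login example above can be sketched in code. Everything here is hypothetical (an invented app model, not a real framework): the point is that automation drives the exact end-to-end path and asserts state after every step, so a code path only the automation flavor exercises, like restoring a saved session, surfaces its bug.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: an automation script that checks state at each step
// of an E2E flow, the way incidental bugs get surfaced while writing it.
public class LoginFlow {

    // Invented app model: a session that can be persisted between launches.
    static class Session {
        boolean loggedIn = false;
        boolean tokenRefreshed = false;

        void login()   { loggedIn = true; tokenRefreshed = true; }
        // The kind of bug only the persistent-login flavor would hit:
        // restoring a saved session forgets to refresh the auth token.
        void restore() { loggedIn = true; /* tokenRefreshed left false */ }
    }

    // The "automation script": same steps every run, a check after each one.
    static List<String> runE2E(boolean persistentLogin) {
        List<String> failures = new ArrayList<>();
        Session s = new Session();
        if (persistentLogin) { s.restore(); } else { s.login(); }
        if (!s.loggedIn)       { failures.add("user not logged in"); }
        if (!s.tokenRefreshed) { failures.add("stale auth token"); }
        return failures;
    }

    public static void main(String[] args) {
        System.out.println(runE2E(false)); // fresh login: no failures
        System.out.println(runE2E(true));  // persistent login: stale token surfaces
    }
}
```

A manual spot-check of "can I log in?" passes in both flavors; only the step-by-step assertion on token state catches the difference.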

3. Bugs found by end users

This didn’t get the first spot because end users bring in a lot of noise and assumptions about how a product should work. Even though they are the ultimate arbiters of whether a product is a success or a failure, there are just too many kinds of them, and it is not easy to categorize and satisfy all of them. A glance through the app store reviews shows it is not conceivable to get a 5-star rating from ALL users for an enterprise app.

Also, with a little twist of the definition, we can include customer crash monitoring, usage data, and A/B testing in this category; if done correctly, I presume this bucket will start to rank higher.

4. Bugs found by stakeholders

By stakeholders, I mean people outside the core team who still have some stake in the finished product. These can be executives, support staff, or writers: folks who have minimal impact on the product but know enough to care about it. Sometimes beta/alpha testers belong in this bucket too. The important thing to note is that they care about both the product and the team, so they give good repro steps and are reasonable, rather than just venting their frustrations like end users.

5. Bugs found during acceptance testing

Testing done by product owners is acceptance testing, i.e., they are accepting the delivered software as acceptable on their own terms. This seems like an obvious top-5 entry, and it ranks below stakeholder testing only because product owners get very familiar with the product and lose their critical mindset as it matures.

As an automation engineer who has a developer title in his job, I find this list somewhat surprising (other than #2), because none of the items actually involves writing code for testing: UI testing, unit testing, functional testing. More on that to follow.