Over the past few years, the software testing industry has changed significantly. For most fast-paced development teams, testing is no longer a sequential step in the software development process. Testing activities start right at the beginning, when requirements are being discussed and fine-tuned.
Despite this industry shift, the core activities of software testing haven't changed much: preventing and detecting defects in software before customers do.
As long as humans (developers) are writing software code, there is a chance that some defects will slip through the prevention stage. Those slipped defects need to be caught before the production release, i.e. in the detection phase.
There are plenty of ways to perform these two activities, and there are no right or wrong answers. All of them are valid as long as defects are not shipped to production :)
Whatever way you choose to test the software, there are some basic measurements (statistics) you can use to stay on top of quality, gain confidence, and know the current state of the application.
Below are some of those common statistics that help measure the quality of a software version.
There are more advanced measurements for determining software quality, but I will focus on these basic stats, which help communicate the quality of the software to management and relevant stakeholders.
1. Passed: This is a no-brainer! You need to know how many tests passed in a given test run, out of the total number of tests. It indicates the success of the software build that is currently under the testing hammer. The higher the passing rate, the better.
2. Failed: The number of failed tests indicates that something is not right in the software. It is a trigger point for further actions such as raising bugs and investigating the code-base. Ideally, there should be 0 failed tests to ensure that no known defects are shipped to production, but that is not always possible in a fast-paced team where continuous delivery is a priority, minor issues are tolerated, and "good enough" quality is accepted by the team and management. If several minor issues are not managed carefully, they can quickly add up to a fragile system.
3. Blocked: The number of tests that cannot be executed for various reasons, such as a feature being unavailable or a previously failed test blocking further tests. It is one of the common stats during the test execution phase. The responsible person must communicate with the team lead/scrum master and raise any concerns. If blockers are not resolved on time, they impact your test estimates and can delay the release.
4. Not Run: The number of tests that are yet to be executed. It is often the default status of a test case in a test run. It indicates the proportion of total tests that have not yet been executed and could result in either a pass or a fail.
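The four stats above can be pulled straight out of a test run's raw results. Here is a minimal sketch, assuming results are recorded as simple status strings ("passed", "failed", "blocked", "not run" — a hypothetical format; adapt to whatever your test management tool exports):

```python
from collections import Counter

def summarize_test_run(results):
    """Count each status in a test run and express it as a percentage.

    `results` is assumed to be a list of status strings — a hypothetical
    format; real tools export richer records.
    """
    counts = Counter(results)
    total = len(results)
    return {
        status: {"count": count, "percent": round(100 * count / total, 1)}
        for status, count in counts.items()
    }

# Example run: 42 passed, 3 failed, 2 blocked, 3 not run (50 total)
run = ["passed"] * 42 + ["failed"] * 3 + ["blocked"] * 2 + ["not run"] * 3
summary = summarize_test_run(run)
# summary["passed"] -> {"count": 42, "percent": 84.0}
```

The percentage alongside each count is what makes the stat comparable across runs of different sizes.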
Managers & senior stakeholders are interested in knowing:
1. Are there any critical or major failures in the system which could block the deployment?
2. Has testing been completed?
3. Are there any blockers?
4. What is the passing rate?
5. Are existing functionalities working as expected?
In addition to the test execution stats above, the bug/defect-related stats below are equally important. If there are several failures but a mismatched number of bugs raised, it indicates a gap in the workflow.
The number of failed tests should correspond to the bugs raised: every test failure should be traceable to a raised bug.
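One way to spot that gap is to list failed tests with no linked bug. A sketch, assuming each test result is a record with a status and an optional linked bug ID (a hypothetical structure):

```python
def unlinked_failures(test_results):
    """Return names of failed tests that have no bug linked to them.

    `test_results` is assumed to be a list of dicts with "name", "status"
    and an optional "bug_id" key — a hypothetical structure for illustration.
    """
    return [
        t["name"]
        for t in test_results
        if t["status"] == "failed" and not t.get("bug_id")
    ]

results = [
    {"name": "login_test", "status": "failed", "bug_id": "BUG-101"},
    {"name": "signup_test", "status": "failed"},  # failure with no bug: workflow gap
    {"name": "search_test", "status": "passed"},
]
# unlinked_failures(results) -> ["signup_test"]
```

An empty list here means every failure is accounted for, which is the state you want before reporting the stats below.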
5. Open bugs: Each test failure should result in a bug. Sometimes one bug is linked to multiple test failures, and that is okay. However, it is important to note all related test failures in the bug report, so that when the bug is fixed, all linked tests can be re-tested. This stat indicates the risk in the software. Depending on the severity of the outstanding bugs and the risk profile of the team, it could block the production release too. A general rule of thumb: if there is a bug that will annoy users in their day-to-day work with your software, fix it before releasing to production.
6. Resolved bugs: The number of raised bugs that have been resolved for this release. It indicates how much code change has been made and what needs to be re-tested, which allows the QA team to quickly assess how much regression testing is required. Remember, every line of code change carries risk! It also indicates the quality of the code being written :)
7. Verified & closed bugs: This stat indicates the performance of both the development team and the QA team. It also indicates an improvement in the quality of the version being tested. Ideally, the number of closed bugs should increase while the number of resolved bugs decreases.
8. Reopened bugs: The number of bugs whose verification failed during re-testing of resolved bugs. When a tester verifies a bug fix and the fix still does not meet the expected outcome, the tester reopens the bug for review and a further fix. A high reopen rate indicates significant issues in the code-base or the quality of the code written. However, finding a different issue during additional testing of the same scenario but in a different environment is not a reopening of the same bug: create a new bug, or talk to the developer. If the developer is happy to reopen the same bug with additional comments to avoid the admin work of writing a whole new bug report, do that.
9. Blocked bugs: The number of blocked bugs indicates a possible roadblock toward the software release. It suggests serious reasons are blocking bug resolution, such as inter-dependent bug fixes, technical dependencies, etc. It is of the utmost importance to resolve blocked bugs, or to revisit their severity: how will they impact the release and end-user behaviour, and can they be fixed with a hotfix without interrupting the end-user experience? Ideally, there should be 0 blocked bugs heading into a software release.
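The five bug stats above can be tallied the same way as the test statuses. A sketch, assuming each bug record carries a status field (the status names here are illustrative; map them to your own tracker's workflow):

```python
from collections import Counter

def bug_report(bugs):
    """Count bugs per status for the current release's test run.

    `bugs` is assumed to be a list of dicts with a "status" key — a
    hypothetical structure; the status names below are for illustration.
    """
    counts = Counter(b["status"] for b in bugs)
    statuses = ("open", "resolved", "closed", "reopened", "blocked")
    # Include zero counts so absent statuses (e.g. 0 blocked) are visible.
    return {s: counts.get(s, 0) for s in statuses}

bugs = [
    {"id": "BUG-101", "status": "resolved"},
    {"id": "BUG-102", "status": "open"},
    {"id": "BUG-103", "status": "closed"},
    {"id": "BUG-104", "status": "resolved"},
]
# bug_report(bugs) -> {"open": 1, "resolved": 2, "closed": 1, "reopened": 0, "blocked": 0}
```

Reporting the zero counts explicitly matters: "0 blocked bugs" is itself a release-readiness signal, per the ideal stated above.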
Bug-related statistics are usually calculated and communicated within the context of the current test run for an upcoming target release. Do not mix this with the total bugs raised in the system to date!
Depending on how the QA lead reports to management, these stats can be in the context of a test run, test suite, project, program or portfolio.
Also, the percentage value is important for all of the above stats, as it shows each state's proportion of the overall result. These are just a handful of basic yet highly useful statistics for knowing the status of your testing efforts, and they help answer most of the questions asked by senior stakeholders.