Test Automation Coverage — A perspective
In Agile development, test automation plays a key role in catching regression bugs and saving testing time and effort. However, not knowing how many requirements the test automation suite covers becomes a blind spot for everyone in the team.
Each sprint, the development team adds, updates, and removes tests to maintain relevant test coverage of the code and the application. A statistical measure of the areas of the application exercised by the tests forms the idea of test coverage.
In this article, we’ll explore measuring test automation coverage in Agile projects.
What is test coverage?
In short, test coverage is a statistical measure of the number of requirements (or any other items) that are covered by tests. To measure coverage, data should be collected through test monitoring in each sprint.
Test monitoring happens throughout the sprint, during which engineers determine the following:
1. the number of features ready for testing,
2. the number of test cases designed,
3. the number of automation tests written, etc.
At the end of the sprint, the collected data should give us an overview of how many functionalities/features have sufficient test coverage. A test coverage report can be generated and documented for further discussion and enhancement. It can also serve as an auditing document during periodic audits and compliance checks: statistical data from test coverage and automation reports provides proof of due diligence and of the maturity of the project's software testing practices.
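As a minimal sketch, the monitoring counts above could be rolled into a simple per-sprint overview. The function and field names here are illustrative, not taken from any particular tool:

```python
def sprint_coverage_summary(features_ready, test_cases_designed, automated_tests):
    """Roll the sprint's test-monitoring counts into a coverage overview.

    All arguments are plain counts collected during test monitoring:
    features ready for testing, test cases designed, and automated tests written.
    """
    if features_ready <= 0:
        raise ValueError("no features ready for testing this sprint")
    return {
        "features_ready": features_ready,
        "test_cases_designed": test_cases_designed,
        "automated_tests": automated_tests,
        # share of designed test cases that ended up automated
        "automation_ratio": (
            automated_tests / test_cases_designed if test_cases_designed else 0.0
        ),
    }

summary = sprint_coverage_summary(features_ready=8, test_cases_designed=40, automated_tests=30)
print(summary["automation_ratio"])  # 0.75
```

A summary like this, produced sprint after sprint, is what turns ad-hoc monitoring into the trend data an audit or retrospective can actually use.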
How to measure test coverage?
The test coverage metric most familiar to development teams is code coverage derived from unit tests. However, let's briefly explore beyond unit tests in this article.
There are different test levels from which we can derive test automation coverage. To dig deeper, let’s consider the following tests: Unit tests, API tests and UI tests (feel free to include more tests as required by your project such as accessibility testing, cross-browser testing, etc.)
The metrics you can derive differ by test level. Code coverage data from unit tests can be produced by the test framework/library itself, while other test levels require additional manual effort, especially higher levels of testing such as UI testing.
With unit testing libraries like Jest, code coverage reporting is available out of the box. With proper configuration, an automated process can be set up on your local machine or CI server to generate reports with code coverage information. Statement coverage, branch coverage, function coverage and line coverage can all be derived from these reports.
However, bear in mind that code coverage reports are purely quantitative. We also need a qualitative way to determine whether unit test coverage is sufficient with respect to the requirements. During code review with peers, the different use cases for a component/function can be identified, and a use case coverage metric can be derived by dividing the number of use cases covered by the unit tests by the total number of use cases implied by the requirements.
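The use case coverage metric is a simple ratio; a sketch of the calculation (the counts themselves still come from manual review):

```python
def use_case_coverage(covered_use_cases, total_use_cases):
    """Use case coverage = use cases exercised by unit tests / total use cases.

    Both counts come from the code review discussion, not from tooling.
    """
    if total_use_cases <= 0:
        raise ValueError("total_use_cases must be positive")
    return covered_use_cases / total_use_cases

# e.g. 9 of the 12 use cases identified during review have unit tests
print(f"{use_case_coverage(9, 12):.0%}")  # 75%
```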
In API testing, engineers typically refer to the API specification document as the test basis for writing tests. These tests generally check field validations to verify data correctness and proper error messages in the response. From the automated tests, you can derive field validation coverage and error validation coverage.
A single API can be used to process more than one data profile. For example, a single GET API can be used to retrieve different kinds of data by setting a specific header or query parameter. In such cases, profile coverage metrics can be used to understand if the automated API tests have covered such conditional flows generated by different user profiles or form profiles or configurations.
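Profile coverage can be sketched as a set comparison between the profiles your tests exercise and all profiles the API supports. The profile names below are hypothetical examples of values selected via a header or query parameter:

```python
def profile_coverage(tested_profiles, all_profiles):
    """Share of known data profiles exercised by the automated API tests."""
    known = set(all_profiles)
    if not known:
        raise ValueError("no profiles defined for this API")
    return len(set(tested_profiles) & known) / len(known)

# hypothetical profiles a single GET API distinguishes via a query parameter
all_profiles = ["admin", "member", "guest", "suspended"]
tested = ["admin", "member", "guest"]
print(profile_coverage(tested, all_profiles))  # 0.75
```

Using sets rather than raw counts also surfaces which profiles are missing, which is usually the more actionable piece of information.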
Deriving API test coverage can be automated with tools like Swagger and Zephyr/Xray. However, the test design process is purely manual and requires initial effort to come up with a baseline set of test cases.
Automated UI tests (sometimes called E2E tests) are designed based on the manual test cases written each sprint. A Requirement Traceability Matrix (RTM) can be used to understand how many requirements are covered by the automated UI tests. In an Agile context, requirements (or user stories) often have acceptance criteria (ACs). In that case, dividing the number of ACs covered by automated tests by the total number of ACs gives us the requirement coverage metric.
Similarly, dividing the number of automated tests by the total number of manual test cases gives us the test case coverage metric for UI tests.
In terms of importance, the requirement coverage metric carries more weight than the test case coverage metric: all requirements matter from a business standpoint, while not every test case (especially edge cases) needs to be automated.
100% test coverage does not guarantee a defect-free application; assuming so violates one of the principles of software testing — "Exhaustive testing is not possible". Instead, test coverage data should be used to improve and optimise your test suite, and it acts as an indicator of the number of requirements covered by tests.
Achieving healthy test coverage with your test automation suite is a balancing act: too many automated tests slow down feedback and increase maintenance cost, while too few leave regression risks undetected.
We hope you found this information useful. Please take care and see you in our next article!
🧙🏼‍♀️ Team Merlin 💛
Application security is not any individual’s problem but a shared responsibility.