Test Case Management Tool: how to make a choice you won’t regret

Ilya Gorshkov
red_mad_robot mobile
6 min read · Jan 20, 2016

The development of any complex program entails a lot of project documentation. Its structure is broadly the same from one project to the next: functional and non-functional requirements (including SDS and PRD documents), architecture documentation, the source code itself with its comments and descriptions, QA documentation, project plans and reports, and so on.

Today I’m going to talk about the artefacts that are part of the QA process at Redmadrobot: test strategies, test plans, test runs and test cases, the traceability matrix, bug reports, productivity and quality metrics, test reports and so on. They fall into a certain hierarchy, are created in a particular order, and need to be kept up to date on a regular basis.

There are a number of different ways one can go about working with QA artefacts. For example, you could use Excel and Google Docs, track everything in a bug tracker (using the Test Case as the Issue Type) or use a commercial solution that is integrated with the company’s bug tracker. This is the path we chose at Redmadrobot. The decision was based on project-specific requirements, the types of QA documentation we create and the testing we do, the volume of manual test cases we develop and execute, and the number of projects we run in parallel.

The next step was selecting the right test case management tool. This is a big decision: picking the wrong tool without a proper evaluation carries a significant cost for the company. The Redmadrobot QA team therefore took a multi-step approach to the selection, starting with a definition of the selection criteria.

These are the criteria we ended up with:

  • out-of-the-box integration with the company’s bug tracker (JIRA)
  • the ability to create and edit test cases, including the ability to import previously created test cases
  • requirements coverage for each test case
  • how easy it is to create Test Runs and Test Suites, and how user friendly the interface is
  • the ability to keep all Test Development and Test Execution artefacts in one place and create a single workspace for the whole QA team
  • the ability to create a Traceability Matrix
  • the ability to assign tasks to specific QA engineers
  • how easy it is to generate reports, metrics and statistics
  • the process of installation, deployment and support

What we were choosing from:

After defining our criteria, we looked at the most popular tools on the market that met our expectations. Upon closer inspection, we ruled a few out and got evaluation licences for the rest to continue our investigation. In the end, we shortlisted three tools that we used in pilot projects:

1. Zephyr

2. TestRail

3. Meliora

After finishing the pilots, we had a clear understanding of what each tool could do, how pleasant it was to work with, how useful its key features were to us, and whether all of that justified the licence cost. These are the high-level characteristics of each tool:

Zephyr ($30 per user). The key thing about Zephyr is that it is built as a native JIRA add-on, which means it should work really well with JIRA. But this is only true for JIRA Server, whereas we’re using OnDemand at the moment, and this caused lots of headaches during the pilot. What’s more, Zephyr does not rate highly on usability: adding test cases takes ages and is not particularly straightforward, and creating plans and runs involves a lot of extra steps.

Meliora ($25 per user). It also requires migration to JIRA Server, and Meliora itself is a fairly cumbersome tool that overcomplicates even simple tasks. On top of that, it ships with its own bug tracker.

TestRail ($20 per user). A simple and easy-to-use tool. Its biggest advantages are that pretty much everything can be customised and that every step is intuitive. Test cases can be imported and exported with just a few mouse clicks.

Given the pilot results and the feedback on each tool, we decided to go for TestRail, which:

  • Lets you create, store and edit test cases, manage test plans, launch test cycles and collect test metrics
  • Easily maps to the test case format we were already using, which helps us assess requirements coverage and generate the materials required by the project team
  • Lets you create all kinds of different reports, from Defect Summaries to Comparison for Cases reports, to test results based on project/component/milestone, etc
  • Lets you fully customise a dashboard and easily extract details on the status of the QA team’s work for different periods (which helps compile the weekly/monthly QA report)
  • Easily integrates with JIRA
  • Has a reasonable price
  • Offers great support
  • Provides a simple and straightforward way to import tests from Excel

For us, being able to import previously created test cases was a key criterion. A few months before we started to look for a test case management tool, we had settled on a standard format for our test cases in Excel. We tried to create something generic that we thought should easily fit into any tool. To our great surprise, TestRail was the only tool we could import our test cases into “as is”. With TestRail, we were able to import several thousand test cases in just a few clicks, which dramatically reduced the time and effort involved in the deployment.
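
TestRail’s built-in CSV/XML import handled this from the UI, but the same bulk load can also be scripted against its REST API. Below is a minimal sketch of that approach, assuming a hypothetical CSV export of our Excel format; the instance URL, credentials, section ID and column names are placeholders, and the custom_* field names follow TestRail’s default case template, so they may differ in a customised installation.

```python
import csv
import requests

# Hypothetical values -- substitute your own TestRail instance, credentials and section.
TESTRAIL_URL = "https://yourcompany.testrail.io"
AUTH = ("qa@yourcompany.com", "your-api-key")
SECTION_ID = 42  # the section the imported cases should land in

def add_case(title, preconditions, steps, expected):
    """Create one test case via the TestRail API (POST add_case/:section_id).

    The custom_* fields below match TestRail's default case template;
    a customised installation may use different field names."""
    payload = {
        "title": title,
        "custom_preconds": preconditions,
        "custom_steps": steps,
        "custom_expected": expected,
    }
    resp = requests.post(
        f"{TESTRAIL_URL}/index.php?/api/v2/add_case/{SECTION_ID}",
        json=payload,
        auth=AUTH,
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["id"]

# Assumes the spreadsheet was saved as CSV with these (hypothetical) column headers.
with open("test_cases.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        add_case(row["Title"], row["Preconditions"], row["Steps"], row["Expected Result"])
```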

Let me now look at what TestRail offers in terms of specific areas of QA work:

Test Development:

  • creation of test plans/suites/test cases;
  • an easy way to store, update and organize test cases;
  • import and export functionality and the ability to edit test cases;
  • an easy way to adjust test attributes;
  • Requirements Traceability (see the sketch after this list).
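
On the traceability point, TestRail stores requirement links in each case’s References (refs) field, which with the JIRA integration typically holds issue keys. Here is a rough sketch of how those links can be pulled via the API and grouped per requirement to get a quick coverage view; the instance URL, credentials and project/suite IDs are placeholders, not our actual setup.

```python
from collections import defaultdict
import requests

# Hypothetical values -- replace with your own instance, credentials and IDs.
TESTRAIL_URL = "https://yourcompany.testrail.io"
AUTH = ("qa@yourcompany.com", "your-api-key")
PROJECT_ID, SUITE_ID = 1, 1

resp = requests.get(
    f"{TESTRAIL_URL}/index.php?/api/v2/get_cases/{PROJECT_ID}&suite_id={SUITE_ID}",
    auth=AUTH,
)
resp.raise_for_status()
data = resp.json()
# Newer TestRail versions paginate and wrap the list in a "cases" key.
cases = data["cases"] if isinstance(data, dict) else data

# Group cases by the issue keys stored in their References (refs) field.
matrix = defaultdict(list)
for case in cases:
    for ref in (case.get("refs") or "").split(","):
        if ref.strip():
            matrix[ref.strip()].append(case["title"])

for requirement, titles in sorted(matrix.items()):
    print(f"{requirement}: {len(titles)} case(s)")
```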

Test Execution:

  • milestones (aligned with the company’s quality criteria);
  • a straightforward way to create and maintain test runs;
  • the ability to raise defects directly from test runs (see the sketch after this list);
  • the ability to assign tasks;
  • easy integration with JIRA.
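
As an illustration of raising defects from a run: a result posted through the API can carry a JIRA issue key in its defects field, and the JIRA integration turns that key into a link on both sides. A minimal sketch, with a placeholder instance, credentials, run/case IDs and issue key:

```python
import requests

# Hypothetical values -- replace with your own instance, credentials, run and case IDs.
TESTRAIL_URL = "https://yourcompany.testrail.io"
AUTH = ("qa@yourcompany.com", "your-api-key")
RUN_ID, CASE_ID = 15, 1203

# Status IDs in a default TestRail installation: 1 Passed, 2 Blocked, 4 Retest, 5 Failed.
payload = {
    "status_id": 5,
    "comment": "Login button unresponsive after device rotation",
    "defects": "MOB-481",  # hypothetical JIRA key; rendered as a link when the integration is enabled
}
resp = requests.post(
    f"{TESTRAIL_URL}/index.php?/api/v2/add_result_for_case/{RUN_ID}/{CASE_ID}",
    json=payload,
    auth=AUTH,
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
```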

Test Management:

  • monitoring activities;
  • assigning roles;
  • assigning tasks and tracking their completion.

Reporting:

  • test execution progress (a sketch of pulling the raw figures via the API follows this list);
  • test results in ready-to-go reports;
  • project statistics;
  • different types of reports;
  • team productivity metrics.
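
Most of these reports come ready-made in TestRail’s UI, but the raw numbers behind them can also be pulled via the API, for example to feed a custom dashboard or the weekly QA report. A small sketch that computes the pass rate for one run, using placeholder instance details and TestRail’s default status IDs:

```python
from collections import Counter
import requests

# Hypothetical values -- replace with your own instance, credentials and run ID.
TESTRAIL_URL = "https://yourcompany.testrail.io"
AUTH = ("qa@yourcompany.com", "your-api-key")
RUN_ID = 15

resp = requests.get(f"{TESTRAIL_URL}/index.php?/api/v2/get_tests/{RUN_ID}", auth=AUTH)
resp.raise_for_status()
data = resp.json()
tests = data["tests"] if isinstance(data, dict) else data  # newer versions paginate

# Default status IDs: 1 Passed, 2 Blocked, 3 Untested, 4 Retest, 5 Failed.
names = {1: "passed", 2: "blocked", 3: "untested", 4: "retest", 5: "failed"}
counts = Counter(names.get(t["status_id"], "other") for t in tests)
executed = sum(counts.values()) - counts["untested"]

print(dict(counts))
if executed:
    print(f"pass rate: {counts['passed'] / executed:.0%}")
```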

To sum up:

We made our final decision back in August. In September we moved most QA activities into TestRail. We are migrating more and more projects there, and we have not had any second thoughts about our choice. We have collected QA metrics that confirm we made the right call, and we were able to train the team on the tool quickly. We will now be finalising the implementation of Requirements Traceability for all projects and will continue to build momentum in our work with TestRail.
