How we used Pivotal Tracker to work with a remote QA team

If you work with/for an enterprise tech company, then you probably work with a QA [or QE] team. And they are probably really important to your boss’s boss. But when you’re trying to release software following an Agile model, finding the right place for a QA team can be a challenge.

This happened on the last project I worked on. A few months into the engagement, the client introduced us to their “QA team” and tasked them with testing the app we were building.

This felt a bit strange because the process we were using on the project didn’t involve a QA team. We used test-driven development, user stories, and conversation in place of lengthy requirements docs. And when it came to bugs, we relied on all team members to find and raise them throughout the dev process. So we weren’t exactly sure how we wanted to work with the new team.

The Main Challenges Were:

  1. Time Difference: Working across the gap between Boston and Shanghai
  2. New Process: Introducing the QA team to our dev process of regular releases and limited “requirements / documentation” in Pivotal Tracker
  3. Building trust: Remotely trusting and valuing a team

How we succeeded and where we fell short with each challenge:

Time Difference

The good: We set up weekly calls at 9AM EST where we discussed product and technical questions with the QA team. The first couple of weeks focused on general set-up; later calls shifted to product questions and bug priorities.

The not so good: Without a working-day overlap, it was difficult to keep the QA team in mind when we came across blockers. We often forgot to update them when the API stopped working or the sample data was erased from the app. That meant they walked into an empty environment and lost a day of testing. We also lost days to back-and-forth email communication. We definitely would have benefited from a co-located team, but we did our best to work with what we had.
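Looking back, even a simple scheduled smoke check could have caught an empty environment before the QA team’s day started in Shanghai. Here’s a minimal sketch of that idea in Python; the endpoints and response shape are hypothetical, and you’d wire the exit code into whatever cron job or CI runner sends your alerts:

```
import sys
import requests

# Hypothetical test-environment endpoints -- adjust to your own setup.
CHECKS = {
    "API health": "https://staging.example.com/api/health",
    "Sample data": "https://staging.example.com/api/items?limit=1",
}

failures = []
for name, url in CHECKS.items():
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        # An empty item list means the sample data was wiped again.
        if name == "Sample data" and not resp.json():
            failures.append(f"{name}: environment is empty")
    except requests.RequestException as exc:
        failures.append(f"{name}: {exc}")

if failures:
    # A nonzero exit lets the scheduler fire an alert before the
    # QA team's workday begins.
    print("\n".join(failures))
    sys.exit(1)
print("Environment looks ready for QA.")
```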

New Process

The QA team was used to reviewing extensive waterfall requirements docs and writing their own tests to identify potential bugs. Our team focused on small user stories with incremental value, test-driven development, and bug reporting by every team member. The QA team expected a pause in development so they could review an entire “Release,” while we released new code almost every day and had no plan to pause development.

The good: After some explanation, the QA team started reviewing all of our “DONE” stories in Pivotal Tracker. They matched each bug they found to the Tracker story we had completed. This meant we didn’t have to stop development, and we weren’t blocking their reviews.
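If you want to script this part of the handoff, Pivotal Tracker’s v5 REST API can list the stories a QA team should look at. A rough sketch, assuming a hypothetical project ID and API token (in Tracker, “DONE” stories are the ones in the accepted state):

```
import requests

PROJECT_ID = "1234567"            # hypothetical project ID
API_TOKEN = "your-tracker-token"  # hypothetical API token

# Pivotal Tracker v5 API: list the stories our team has accepted,
# i.e. the "DONE" work the QA team reviews against.
resp = requests.get(
    f"https://www.pivotaltracker.com/services/v5/projects/{PROJECT_ID}/stories",
    headers={"X-TrackerToken": API_TOKEN},
    params={"with_state": "accepted"},
)
resp.raise_for_status()

for story in resp.json():
    print(f"#{story['id']}: {story['name']}")
```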

The even better: As our product progressed, some of the earlier functionality was completely replaced and re-written. So as the QA team went through stories from the beginning, they saw and came to accept how our team changed functionality to match new understandings of user needs. Instead of creating new bugs for these items and sticking them into the project tracker, we were able to discuss the functional needs based on our most up-to-date understanding.

Building Trust

It’s really easy to assume that the QA team is going to come up with bugs that aren’t relevant or high priority. I know we had some bias walking in. But when we started looking into the bugs they were reporting, we found some really helpful issues that we were able to work through collaboratively.

The good: We gave the QA team read-only access to the backlog because we were concerned that write access would let reported bugs flood it. With that access, they assigned each reported bug to a real story number. Then, as we added their bugs to the backlog and icebox, they could track our progress in real time, without back-and-forth emails.
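On our side, turning a QA-reported bug into a Tracker story was mechanical enough to script against the same v5 API. A sketch, where the helper name, the “qa-reported” label, and the description format are our own conventions rather than anything Tracker requires:

```
import requests

PROJECT_ID = "1234567"            # hypothetical project ID
API_TOKEN = "your-tracker-token"  # hypothetical API token

def file_qa_bug(title: str, matched_story_id: int, details: str) -> dict:
    """Create a bug story in the icebox that points back to the DONE
    story the QA team matched it to, so they can follow it read-only."""
    resp = requests.post(
        f"https://www.pivotaltracker.com/services/v5/projects/{PROJECT_ID}/stories",
        headers={"X-TrackerToken": API_TOKEN},
        json={
            "name": title,
            "story_type": "bug",
            "current_state": "unscheduled",  # lands in the icebox
            "labels": ["qa-reported"],       # our own convention
            # Tracker turns "#<id>" references into story links.
            "description": f"Reported by QA against #{matched_story_id}.\n\n{details}",
        },
    )
    resp.raise_for_status()
    return resp.json()
```

A call like file_qa_bug("Login fails on empty password", 98765432, "Steps to reproduce: ...") puts the bug exactly where the QA team can watch it move through the backlog.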

More good: We got to work together over a long period, which gave us time to adapt. Over the three-month engagement, the weekly check-ins built trust between the two teams. As the QA team learned our flow, their weekly emails became increasingly relevant. And as we came to understand their rhythm, our weekly calls went from an hour to 15 minutes.

The even better: In one of the QA team’s final passes, our app came through with a 98% pass rate and no major issues. This meant the client stakeholder was exceptionally pleased with our performance, without us having to disrupt our dev process of iterative releases.

Key Takeaways

  1. QA teams can adapt to new processes.
  2. QA teams can raise valuable bugs that can be fixed in an iterative release schedule — even with Agile teams.
  3. It’s hard to work with a remote team. It gets better over time, but I would still prefer to work with a co-located team.