Is it time to update your product design team’s tech stack?

Jessica Pelletier · Published in Inside Q4 · Jul 25, 2022 · 7 min read

Here’s the selection and evaluation process we developed to guide our software purchase decisions.

Soon after I joined the product design team at Q4, I was presented with an opportunity to look into some new tools to level up our research stack.

The existing tools weren’t working anymore for the growing team. The valuable research we were conducting was becoming difficult to find and re-use. There was no streamlined way to analyze qualitative data, and the data we were gathering from usability testing was one-dimensional and slow to translate into insights.

We chose two new research tools to help with these problems. One is a research repository for storing and analyzing our research. The other helps us quickly gather insights from testing.

Getting to that point took some serious effort, though. We didn't have an instruction manual on how to assess and implement new software for the team, so we set out to create one. Software ages out; requirements and budgets change. The tools we bought will eventually need replacing, and the next person to pick up the task will benefit from my experience, and possibly you will too.

Here are the steps we’ve developed so far to choose new tools to enhance our work as a team.

Where to start?

There’s a big world of software out there. To avoid becoming instantly overwhelmed, lean into the skills you already have as a product or development professional.

Approach this task as you might any design project: by starting with the problem. I listed the reasons we were software shopping above, but I didn't get there alone; I discovered them with the people using the tools.

You can do this any number of ways, depending on the size and culture of your team. Consider:

  • Conducting 1:1 interviews with key users
  • Sending around surveys using Google Forms
  • Hosting workshops using Miro

No matter your data collection method, the purpose is to identify pain points team members are having with the tools they currently use and what they wish they had instead. Information-gathering at this stage sets the foundation for the entire evaluation process.

In Q4's case, I organized a workshop using Miro. This is a popular activity within our team's culture, so it made the most sense for us. We like it because it is efficient: it lets us brainstorm about pain points and preferences, group those points into themes, and then summarize our findings into requirements. The information from this workshop formed the scoring criteria for our evaluation process.

Pro tip: For the evaluation of our second new tool, we voted on our must-have features. Features with more votes were weighted more heavily during the initial evaluation round.

How to create a shortlist

Before you create a shortlist, you have to establish a longlist.

Begin by surveying the vast landscape of available tools. Do not be afraid!

The research you have done up to this point will provide you with an efficient lens for spotting suitable options from within the mix — without getting overwhelmed. It is the difference between grocery shopping with a specific recipe in mind, and just walking in hungry.

I also solicited recommendations from team members who had experience with similar tools, or awareness of good solutions through their professional networks.

This resulted in an initial longlist of potential vendors that could meet our needs.

The next step was scoring each tool against the criteria we developed. This is not as easy as it sounds, but it's not impossible either: you just have to start digging for information. You can comb through the tools' websites, communicate with the companies directly through their website chats, or even talk to salespeople.

Pro tip: I’ve since learned that you can ask companies to complete an assessment of their tool against your criteria. This could be a great time saver but it might yield inconsistent results across tools.

I created a scorecard that included our criteria and their respective weighting to record the results of the evaluation.

Tools received a “yes”, “warning”, or “no” score for each criterion. A yes earned the full weight of the criterion, a warning earned half the weight, and a no earned zero points.
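
If it helps to see the arithmetic, here is a minimal sketch of that scoring rule in Python. The tool names, criteria, and weights below are invented for illustration; they are not our actual scorecard.

# A minimal sketch of the weighted scoring described above.
# Tool names, criteria, and weights are made up for illustration.

WEIGHTS = {"research repository": 3, "tagging": 2, "mobile app": 1}

# Each tool's answer per criterion: "yes", "warning", or "no".
SCORECARD = {
    "Tool A": {"research repository": "yes", "tagging": "warning", "mobile app": "no"},
    "Tool B": {"research repository": "yes", "tagging": "yes", "mobile app": "warning"},
}

# yes = full weight, warning = half weight, no = zero points
ANSWER_VALUE = {"yes": 1.0, "warning": 0.5, "no": 0.0}

def total_score(answers):
    """Sum each criterion's weight scaled by the answer value."""
    return sum(WEIGHTS[c] * ANSWER_VALUE[a] for c, a in answers.items())

for tool, answers in SCORECARD.items():
    print(tool, total_score(answers))
# Tool A -> 3*1 + 2*0.5 + 1*0 = 4.0
# Tool B -> 3*1 + 2*1 + 1*0.5 = 5.5

The same calculation translates directly into a spreadsheet formula if that's where your scorecard lives.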

When all the tools scored the same on a particular criterion (e.g. every tool received a yes, or every tool received a no), it was helpful to remove those columns from the scorecard to get a better visual focus on the remaining differentiators.

Budget was also a consideration. I assigned a low or high price rating to each tool to get an idea of how they stacked up against each other on this dimension. Some companies don't show pricing for every plan available, so there was sometimes extra work involved in tracking down the full picture.

Pro tip: Don’t assume any prices! Some companies hide the prices of their higher-tier plans or make them infinitely negotiable. If you know which plan is likely to be the best fit, get that pricing information right away to avoid any surprises. One of the tools we assessed had relatively low pricing for the first two tiers but jumped up significantly for the enterprise-level plan.

Once the evaluation was complete, I met with stakeholders to share more about the top-scoring tools and their prices. Pricing had a big impact on turning the longlist into a shortlist. Following this discussion, we had a list of three tools to move on to the next round of evaluation.

The long-list scorecard. Stars represent the weighting system.
The scorecard showing only differentiators and price ratings (lower scoring tools were removed).

How to decide on the right tool

It isn’t enough to just fill in a scorecard. Software mistakes are expensive, so you need to be sure your team is satisfied with the selection. Engagement also fosters buy-in and buy-in fosters adoption. If the team won’t or can’t use their tech, the entire exercise is for naught.

Our team was ready and willing to roll up their sleeves to run trials of the short-listed tools. We took advantage of the free trials offered by each vendor so our team members could play around.

Pro tip: Some free trial periods were longer than others, but we found the companies were very accommodating with trial extensions.

We kicked off the trials with a learning session to get everyone acquainted with each tool and hop over the learning curve. We then agreed on a timeline for completing the evaluations. For us, this was two weeks.

Some trials had feature limitations, but in general, we could access enough to get a feel for how well each tool worked against our requirements. Where we couldn't try something we wanted to, we filled those gaps through conversations with the vendor.

We used Miro again to record our thoughts and overall impressions as we tried each option. It was helpful to have a standard set of tasks to go through with each tool so the comparisons were fair. For example:

  • Create a new project
  • Use a template
  • Try the mobile app

Pro tip: The trials step can be a real time-eater. After our first attempt, we found working in pairs helped make completing the work more manageable.

After the two weeks, we came back together to discuss which tool was the best fit. Referring back to our original needs assessment was a good reminder of our goals during this discussion. Finally, we held a vote to decide. Both times, the team's choice was unanimous!

An example of our trials board in Miro with everyone’s feedback.

Wrapping up after the decision

The final stretch was sharing the team's decision with stakeholders and preparing for the tool's implementation and rollout. I created a summary of what the chosen tool would deliver and how it would solve the team's pain points to back up our recommendation.

Pro tip: Start creating a rollout plan as soon as the tool is approved; don't wait for the procurement process to be completed. It can take some time to create internal training documentation and plan how the team will start using the tool, especially for more complex solutions.

From here, my project was completed, but the journey was really only beginning for the team. Implementing new systems is a project unto itself and requires care, patience, and ongoing communication. We have held retrospectives to talk about what is going well with implementation and what needs improvement.

Our approach to tool selection and evaluation is part of a desire at Q4 R&D to continually improve the way we work. It’s an evolving process that we have honed over time and that we will no doubt iterate on in future.

What tips can you share with me about your tool selection process?
