Welcome to the research tool Olympics 2021: our hunt for the best of the best

Kim Porter
EE Design Team
8 min read · Aug 23, 2021

Kim Porter, ResearchOps Manager, shares our rather competitive approach for finding the best research tools for the BT design team’s needs.

If you want more people in your organisation to have the tools, knowledge, and resources to do user research, you’re not alone. We’re on the same journey at BT.

As ResearchOps Manager, my focus for the last three months has been on tooling. I’d like to share some details about how we chose the user research platform we’ll soon be rolling out to those researching within the BT design team.

In this post, I’ll take you through our approach to selecting a preferred tool. I’ll touch on the resources, metrics, measures and research methods we used.

Hopefully, you’ll finish the article with some useful techniques and thinking you could re-purpose to answer your own questions about research tooling.

Setting the scene

First, here’s some context. We have a large design team at BT Consumer. It’s formed of product designers, content designers, content editors, SEO experts, user researchers, and accessibility specialists.

Our user researchers work in a centralised team, focusing on multiple squads and experiences at any given time. As you can imagine, they’re busy people!

Most of our research is fast-paced and evaluative, but we'd like to deepen our user knowledge by doing more in-depth discovery.

To achieve this, we need to free up our user researchers’ calendars. We’re doing this by enabling everyone in our team to do their own evaluative research.

As the type of research we’d like others to take on is remote, moderated, and evaluative, we focused on finding the platform best suited to that. We have other tools in our arsenal that allow for different types of research too.

Let’s take a look at how we found the right tool for this particular situation. Take note: I won’t name the vendors we used or tested. Aside from not wishing to promote one supplier over another, I want to focus more on our approach to making the decision, rather than the tool itself.

Finding needles in a haystack: choosing 10 tools from everything on the market

If you work in user research, you’ll know there are many exciting, feature-packed tools on the market. It’s a blessing and a curse. So much choice and variety, but lots of decision fatigue.

To whittle down all the tools on the market to 10, we took the following approach:

  • Run a needs analysis:
    We interviewed experienced researchers, people who research as part of their design role, research leads, and research session observers about what they needed from a tool. From these needs, we created a feature list broken down into must-have and nice-to-have sections.
  • Desk research:
    We wanted to find out which tools were out there, using the ResearchOps Community Toolbox as a starting point to see what choices we had. We then looked at the websites of the tools that interested us to see how many of our must-have features they offered (there's a rough sketch of this kind of comparison just after this list).
  • Journey map:
    We visualised the research process and mapped out the different ways the tools promise to fulfil each stage of the journey. We found there are two main types of research tool setup: a combination of a participant panel solution and a videoconferencing solution, or a single all-in-one tool that covers both.
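
To make the desk-research step a little more concrete, here's a minimal sketch of that must-have comparison expressed as code. It's purely illustrative; the tool names and features are hypothetical placeholders, not the products we actually assessed.

```python
# Illustrative sketch only: checking candidate tools against a must-have feature
# list during desk research. Tool names and features here are hypothetical.
must_haves = {
    "remote moderated sessions",
    "participant recruitment",
    "screener surveys",
    "session recording",
    "observer access",
}

tools = {
    "Tool X": {"remote moderated sessions", "participant recruitment",
               "screener surveys", "session recording", "observer access",
               "highlight reels"},
    "Tool Y": {"remote moderated sessions", "session recording", "observer access"},
    "Tool Z": {"participant recruitment", "screener surveys", "session recording"},
}

for name, features in tools.items():
    missing = must_haves - features
    covered = len(must_haves) - len(missing)
    print(f"{name}: {covered}/{len(must_haves)} must-haves covered"
          + (f", missing: {sorted(missing)}" if missing else ""))
```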

Following this work, we selected 3 participant panels, 1 recruitment agency, 1 videoconferencing solution, and 5 all-in-one tools. We also took into consideration that we already had access to other videoconferencing solutions at BT.

Diagram showing the research process from planning to recruitment to the research day itself, followed by analysis and storage of lessons learnt.
Journey map showing our rough visualisation of the research process

Research tool Olympics: getting 10 tools down to three

We wanted to be able to actually try a few tools before committing to a contract (procurement processes at BT are complex, and although we're trying to be more agile with shorter contracts, it's still a big commitment). And although 10 tools were better than infinite tools, it was still too many. We set ourselves the aim of bringing the 10 down to just three.

We started having conversations with account managers to get more detailed information that wasn't available on the product websites. We watched demos and asked the teams representing each tool the same questions.

I began conversations with other ResearchOps and DesignOps folks to ask if anyone had experienced working with the suppliers in question, what was working well for them, and what could be better.

Following these two sets of conversations, our confidence grew that an all-in-one tool would be the best fit for our situation, and a few options dropped away because they lacked key features. We were left with three all-in-one tools to trial.

The grand final: getting three tools down to one gold medallist

With these three tools selected, it was time for more researcher reinforcement. We built a core team with researchers Katharine Johnson and Wendy Ingram (codenamed the ‘Tooling Heroes’), and we put our heads together in a workshop to figure out our research questions, our metrics, and our measures.

Our research questions enabled us to focus on what we wanted to find out from the trial. Our metrics told us what the indicators of a successful research tool could be. And our measures allowed us to see how each tool performed individually and to compare the tools’ performance.

Here’s what we settled on:

  • Research question 1: Which tool performs the best?
    Metric 1: Participant quality
    Measure 1: no-show rate (%), % of suitable, expressive participants, % of participants who genuinely met our screening criteria
    Metric 2: Satisfaction (for full-time researchers and session observers)
    Measure 2: % of trial participants saying they'd actively want to use that tool again, % of trial participants saying they'd recommend the tool to a colleague
  • Research question 2: Which tool is the easiest to use?
    Metric: Usability (for our pilot researchers, who have just learned to do evaluative research)
    Measure: EES (Effectiveness, Efficiency, Satisfaction) overall scores, pass/fails, and areas of difficulty or ease
  • Research question 3: Which tool best meets our needs?
    Metric 1: Function
    Measure 1: % of must-have functionality fulfilled on comparison sheet
    Metric 2: Accessibility (a11y) commitments
    Measure 2: A ranking of efforts and progress made to improve the experience of researchers and participants with access needs
  • Research question 4: Which tool offers the best value?
    Metric: Price
    Measure: Price per participant
A poster showing our EES framework. It reads ‘Effectiveness’, ‘efficiency’, and ‘satisfaction’. Under each section there are ticks, crosses, or exclamation marks to indicate whether a service passed with no issue, failed with a minor issue, or failed with a major issue.
Our EES framework
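
For anyone curious how measures like these could be tallied in practice, here's a minimal, purely illustrative sketch of a per-tool scorecard. The names, counts, and prices are hypothetical, not our real trial data.

```python
# Illustrative sketch only: one way to tally the trial measures for a single tool.
# All names and numbers here are hypothetical, not real trial data.
from dataclasses import dataclass


@dataclass
class ToolScorecard:
    name: str
    sessions_booked: int        # participant sessions booked through the tool
    no_shows: int               # participants who didn't turn up
    met_screening: int          # participants who genuinely met the screening criteria
    surveys_returned: int       # researcher and observer questionnaires returned
    would_use_again: int        # respondents who'd actively want to use the tool again
    would_recommend: int        # respondents who'd recommend the tool to a colleague
    price_per_participant: float

    def summary(self) -> dict:
        attended = self.sessions_booked - self.no_shows
        return {
            "no_show_rate_pct": round(100 * self.no_shows / self.sessions_booked, 1),
            "met_screening_pct": round(100 * self.met_screening / max(attended, 1), 1),
            "would_use_again_pct": round(100 * self.would_use_again / self.surveys_returned, 1),
            "would_recommend_pct": round(100 * self.would_recommend / self.surveys_returned, 1),
            "price_per_participant": self.price_per_participant,
        }


# Hypothetical example values for one tool
example = ToolScorecard("Tool X", sessions_booked=24, no_shows=2, met_screening=20,
                        surveys_returned=18, would_use_again=16, would_recommend=15,
                        price_per_participant=55.0)
print(example.summary())
```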

With our research questions and metrics in mind, Katharine, Wendy, and I thought about research methods.

We’d already compiled feature lists for each of the three tools and quizzed suppliers about accessibility. We had a breakdown of cost per participant following our chats with account managers. So, we needed methods to help us gather data for the two remaining questions.

For the question ‘which tool performs the best?’ we enlisted the help of six user researchers, who would each run two similar projects with similar participant criteria through the tool assigned to them.

They would fill out a questionnaire each time they interacted with the tool, stating what they did and rating how satisfactory the interactions were, before letting us know whether they'd use the tool again or recommend it to a colleague, based on their experience that day.

To make the comparison as fair as possible, we tried to assign each tool to researchers who had no previous experience of using it.

We built a similar questionnaire for session observers to fill out after attending sessions run on the tools, to get a sense of what their experience was like.

For the question ‘which tool is the easiest to use?’ we decided we'd use our Effectiveness, Efficiency, and Satisfaction (EES) usability matrix to measure how well each tool guided and supported our newer researchers in setting up an evaluative test. You can read more about our EES methodology in this interview with Toby Yakubu-Sam.
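
As a rough, hypothetical sketch of how an EES-style matrix can be aggregated (the real framework marks each step with a tick, cross, or exclamation mark, as in the poster above), here's one way the pass / minor-fail / major-fail marks might be counted up. The task names and marks are made up for illustration.

```python
# Illustrative sketch only: aggregating an EES-style matrix for one tool.
# Each setup task is marked for Effectiveness, Efficiency, and Satisfaction as a
# pass with no issue, a fail with a minor issue, or a fail with a major issue.
# Task names and marks below are hypothetical.
from collections import Counter

PASS, MINOR, MAJOR = "pass", "minor_fail", "major_fail"

ees_matrix = {
    "Create a study":          {"effectiveness": PASS,  "efficiency": PASS,  "satisfaction": PASS},
    "Set screening criteria":  {"effectiveness": PASS,  "efficiency": MINOR, "satisfaction": PASS},
    "Schedule participants":   {"effectiveness": MINOR, "efficiency": MINOR, "satisfaction": PASS},
    "Run a moderated session": {"effectiveness": PASS,  "efficiency": PASS,  "satisfaction": MINOR},
    "Export the recording":    {"effectiveness": MAJOR, "efficiency": MINOR, "satisfaction": MINOR},
}

# Overall counts across every task and dimension
totals = Counter(mark for task in ees_matrix.values() for mark in task.values())
total_marks = sum(totals.values())

print(f"Pass rate: {100 * totals[PASS] / total_marks:.0f}%")
print(f"Minor issues: {totals[MINOR]}, major issues: {totals[MAJOR]}")

# Tasks with any major issue are the areas of difficulty worth calling out
difficult = [task for task, marks in ees_matrix.items() if MAJOR in marks.values()]
print("Areas of difficulty:", difficult)
```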

Running the trial

This was the simplest part: we'd done so much planning that it pretty much ran like clockwork. That was no small feat when you consider that, including research participants, 81 people were involved!

Over the course of two weeks, our amazing full-time researchers carried out their 12 days of research, alongside the three days of usability tests of each of the tools, run by Katharine, Wendy, and me.

Our researcher and observer scorecards filled up, along with our EES matrices, and we ended up with a wealth of data to add to what we'd already collected.

Of course, there were some snacks and motivation provided about halfway through the trial, in Covid-safe mini care packages sent in the mail. Well-deserved by everyone who took part. Also thank you to the researcher who had the forethought to take a photo before attacking the sweets — not sure I’d have managed that!

A packet of Haribo Starmix sweets, a small card with the word ‘thanks’ written on it, and a striped coaster displaying the words ‘ResearchOps hero’ are laid on a table.
A care package was sent in the mail to the researchers participating in the trial

The results

Analysis isn’t my strong suit, so I’m lucky that Katharine and Wendy were on hand to make excellent sense of the data we collected.

We met to discuss the information and make sure we were aligned on what was important for us to call out. Then we divided the data between the three of us to work on separately, before regrouping to play back and discuss our separate findings. In the same session, we weighted our findings before merging them into our overall results.
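
To give a sense of what that weighting step can look like, here's a minimal sketch of merging per-measure scores into a weighted leaderboard. The scores and weights below are hypothetical placeholders, not the figures we actually used.

```python
# Illustrative sketch only: merging per-measure scores into a weighted leaderboard.
# Scores and weights below are hypothetical, not the ones actually used.
tools = ["Tool A", "Tool B", "Tool C"]

# Normalised scores per measure (0 to 1, higher is better)
scores = {
    "participant_quality":   {"Tool A": 0.6, "Tool B": 0.7, "Tool C": 0.9},
    "satisfaction":          {"Tool A": 0.5, "Tool B": 0.8, "Tool C": 0.8},
    "usability_ees":         {"Tool A": 0.7, "Tool B": 0.6, "Tool C": 0.9},
    "must_have_coverage":    {"Tool A": 0.8, "Tool B": 0.9, "Tool C": 0.9},
    "a11y_commitments":      {"Tool A": 0.5, "Tool B": 0.7, "Tool C": 0.6},
    "price_per_participant": {"Tool A": 0.9, "Tool B": 0.6, "Tool C": 0.7},
}

# Weights reflect how important each measure is for the intended use of the platform
weights = {
    "participant_quality": 0.25,
    "satisfaction": 0.20,
    "usability_ees": 0.20,
    "must_have_coverage": 0.15,
    "a11y_commitments": 0.10,
    "price_per_participant": 0.10,
}

leaderboard = sorted(
    ((sum(weights[m] * scores[m][t] for m in weights), t) for t in tools),
    reverse=True,
)
for rank, (score, tool) in enumerate(leaderboard, start=1):
    print(f"{rank}. {tool}: {score:.2f}")
```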

As mentioned before, we can’t share the vendors’ names here, but this is what we ended up with:

A multicoloured leaderboard shows how well each tool performed against specific measures, all organised by research question. Tool C did the best, taking first place in 5 categories out of 9. Tool B came second, and Tool A came last.
The research tool Olympics leaderboard

‘Tool C’ became our preferred research platform, and we’re now working with other areas of BT, and procurement, to secure enough licences to go around all our colleagues who do research.

This was a closely fought trial, and the other tools also performed well. Tool C just edged ahead in a few areas that were important for our intended use of the platform. We're so thankful to all the account managers who worked tirelessly to get these trials set up; if you see this post, you know who you are. Open and fair competition like this makes sure we get the best tool for the job.

What’s next?

Now that procurement is underway, the next step for me and the research team is to think about how to roll out our new platform.

You may have seen our previous posts How to make a researcher Part 1 and Part 2 by Francis Webb and Dahni Maisuria; they tell the tale of how we're working on increasing our research capacity through democratisation. Those who took part in the course and graduated have become pilot researchers, and we'd like them to be the first to gain access to the research platform once it's ready.

The rollout and further training of our pilot researchers is a big project, and we want members of the design team to feel as supported as possible as they continue to learn to be researchers. Look out for a post detailing how this all goes a bit later in the year.


Kim Porter
EE Design Team

Research Operations at Skyscanner, previously at Monzo & BT. They / them. @kimlouiseporter on Twitter.