Using RITE to quickly resolve experience issues

Jas Nijhar · EE Design Team
4 min read · Jul 13, 2021

User researcher Jas Nijhar shows how a trial of the ‘RITE’ method of usability testing has benefitted the BT design team and improved user satisfaction.

At BT we use a Build, Measure, Learn (BML) approach to help squads rapidly learn how they should achieve a desired outcome for customers. This approach encourages squads to answer the following questions:

  1. “Should we?” — market fit
  2. “How might we?” — problem fit
  3. “Can they?” — usability fit
  4. “Will they?” — behaviour & value fit

It’s when we’re answering question three “Can they?” that we can use the RITE method.

What is RITE?

Rapid Iterative Testing & Evaluation (RITE) is a method of usability testing where issues are identified and solved as quickly as possible. It’s ideal for when you’re testing low/mid-fidelity prototypes. Prototypes are altered during the study. After a few participants encounter a usability issue, the prototype can be updated before the next test. This method allows you to do several rounds of testing in a very short space of time.

This lean research method is all about being as efficient as possible, allowing teams to react to usability issues faster. In turn, it should allow teams to launch or A/B test the best version of their product or feature.

What’s involved in conducting RITE testing at BT?

I started by creating a Mural template for any of our researchers to pick up and start using. It includes a step-by-step guide for our researchers, to help them plan out the study and use the template efficiently.

A digital whiteboard showing columns for usability tests you might do, and space to record findings.
Mural template that guides researchers through the testing, including prompts on how to take notes, pre-task questions, and space to record insights and resulting ‘how might we’ statements.

The Mural board also acts as a collaborative space for the wider squad to plan, take notes when observing, do analysis together, and generate ideas for the next iteration.

From there, we take it step-by-step.

Step one

Create a testing schedule. Decide how many testing days you need and when. Plan a one-day gap between testing days so designers have time to make changes to the prototypes. If you need more time to analyse the data gathered, add extra days between testing days.
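To make the rhythm concrete, here's a small sketch that lays out that kind of alternating schedule. The function name, activities, and start date are hypothetical, not part of BT's actual tooling; it just illustrates the test-day/update-day cadence described above.

```python
from datetime import date, timedelta

def rite_schedule(start: date, test_days: int, gap_days: int = 1):
    """Lay out a RITE schedule: each testing day is followed by
    `gap_days` of prototype updates before the next session."""
    schedule = []
    current = start
    for day in range(1, test_days + 1):
        schedule.append((current, f"Test day {day}: sessions (AM), EES analysis (PM)"))
        for _ in range(gap_days):
            current += timedelta(days=1)
            schedule.append((current, "Update prototypes"))
        current += timedelta(days=1)
    return schedule

# Example: three testing days starting on a Monday
for d, activity in rite_schedule(date(2021, 7, 5), test_days=3):
    print(d.strftime("%a"), activity)
```

With a Monday start and a one-day gap, this lands testing on Monday, Wednesday, and Friday, matching the example schedule pictured above.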

A calendar grid showing the days of the week, with testing and analysis happening on Monday, Wednesday, and Friday.
An example testing schedule.

Step two

Recruit your participants for each of the testing days — at least 3 participants per test day. Schedule testing for the morning of each test day to allow for analysis in the afternoon.

Step three

Create your prototype(s) and add screenshots of them into the “Task” section of the Mural template.

Step four

On the morning of Testing Day One, conduct your usability testing (moderated or unmoderated), taking notes as a team in the Mural board around the screenshots. At BT, we believe research is a team sport, and we actively encourage all disciplines (designers, POs, engineers, etc.) to get involved in research.

Step five

Text on a purple background that reads ‘Measuring the usability of journeys with user research’ and then lists effectiveness, efficiency, and satisfaction as categories of measurement.
We use a framework that scores the effectiveness, efficiency, and satisfaction of a journey to understand how it’s performing.

On the afternoon of Testing Day One, complete your Effectiveness, Efficiency and Satisfaction (EES) analysis on the Mural board, using the notes taken. As a team, review the EES analysis, identify key insights, and add them to the “Key Insights/Takeaways” section of the Mural. Using those insights, create ‘how might we’ statements in the “HMW statements” section. Finally, vote as a team on where to focus and decide what changes to make to the prototype(s).
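The details of BT's EES rubric aren't covered here, but the day-level scoring can be sketched simply. This hypothetical example assumes each participant's session is rated 1–5 on each dimension and averages per dimension, giving a score you can compare across testing days to see whether prototype changes helped.

```python
from statistics import mean

# Hypothetical per-participant ratings (1-5) for one task on one test day.
# The real scoring rubric may differ; this only shows the averaging idea.
day_one = [
    {"effectiveness": 3, "efficiency": 2, "satisfaction": 3},
    {"effectiveness": 4, "efficiency": 3, "satisfaction": 3},
    {"effectiveness": 3, "efficiency": 3, "satisfaction": 4},
]

def ees_summary(sessions):
    """Average each EES dimension across participants for one test day."""
    dims = ("effectiveness", "efficiency", "satisfaction")
    return {d: round(mean(s[d] for s in sessions), 2) for d in dims}

print(ees_summary(day_one))
```

Recomputing this after each testing day gives the squad a simple trend line: if the day-two efficiency average rises after a prototype change, the fix likely worked.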

Step six

Update the prototype(s) the following day.

Step seven

Repeat the cycle (steps 4–6) for the remaining testing days in your schedule.

So, does it work?

I trialled this Mural board and RITE process in our BT TV and Sport area, and it’s been very successful.

An example of the Mural template in use, with overall effectiveness and satisfaction scores recorded.
Populating the Mural board with EES scores over the week.

Many usability issues were identified in the early days of testing and quickly fixed, and the improvements showed up in the testing days that followed. The EES framework made it easy for the squads to see those improvements reflected in their design.

For example, when we tested a new BT Sport buy page, the prototype updates made after day one of testing resolved minor efficiency issues around the understanding of key messages, and the average effectiveness and satisfaction scores increased.

I received a lot of positive feedback from the squads involved. Whilst they found the testing week to be “intense”, they also found it “very collaborative” and “a lot of fun!”. At the end of the testing week, the squads felt much more confident in the design they’d created, as they could see a tangible improvement in the EES scores.

As a user researcher, I feel like I’m saving myself a lot of time with this approach, and the squads see the impact of their efforts more quickly. We’re condensing several rounds of usability testing into one week, and running the analysis collaboratively, immediately after the testing sessions, helps move things forward quickly in an agile way.

Have you found any research methods that allow for rapid prototyping? What are your favourites? Let us know in the comments.
