Measuring user experiences — an interview with Toby Yakubu-Sam
Measuring user experiences allows us to make behavioural data visible so that we can identify moments of inertia, friction and discomfort. In order to achieve this, instruments of measurement need to be context-sensitive, reusable and durable so that they can measure the evolution of experiences across time and space.
Recently Toby Yakubu-Sam, User Research Manager within the Design team at BT, created a framework for measuring user experiences, grounded in the principles of inclusive design. Below he shares how he did it.
1. So firstly, tell us about your time at BT?
I’m Toby. I’ve been at BT for roughly 2.5 years. When I joined there were only 3 researchers, so we operated as a mini-agency. As requests grew, the team expanded from 3 researchers to the 8 we have today. During this time BT went through a digital transformation, which prioritised putting the user at the heart of our business. We were keen to use this opportunity to democratise user research within BT, so we began working closely with different squads to uncover the range of questions they wanted to explore.
2. What does user research mean to you?
A user researcher is your companion. We bridge the gap between what our colleagues want to know and what our research participants want to share with us, in the process creating connections between them. At the outset you don’t really know what you’re going to get out of the research, as participants can lead you down unexpected pathways. Conducting research when there’s uncertainty around a problem is like stepping into the unknown, and generative research methods help increase our understanding. When we experiment with different ideas, evaluative research methods and measuring our experiences ensure we have an objective view of our journeys that we can iterate on.
3. You recently created a framework for measuring evaluative research — why did you do that?
We wanted to give our colleagues the tools and training to make it easier for them to make product design decisions using evaluative research data. We wanted to help them to prioritise which user needs could be most effectively investigated and what type of data provided the strongest evidence.
Previously we had captured both behavioural and linguistic data, but we repeatedly observed users struggling with an experience while simultaneously expressing satisfaction with it. To get a better understanding of what was happening, we created a framework designed to be used with evaluative research methods, built on the 3 classic metrics of effectiveness, efficiency and satisfaction:
- Effectiveness to measure task completion
- Efficiency to measure friction
- Satisfaction to measure ease of use
These measures encouraged us to be more experimental and forced us to be more objective, in the process increasing our confidence in the data that became visible.
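To make the three metrics concrete, here is a minimal Python sketch of how they might be rolled up from a batch of moderated test sessions. The `Session` fields, scales and aggregation are illustrative assumptions, not BT's actual framework:

```python
# Illustrative sketch only — field names, scales and aggregation are
# hypothetical, not taken from BT's framework.
from dataclasses import dataclass

@dataclass
class Session:
    completed: bool    # did the participant finish the task?
    errors: int        # friction events observed (mis-clicks, backtracks)
    ease_rating: int   # post-task ease-of-use score, 1 (hard) to 7 (easy)

def summarise(sessions):
    """Roll a batch of sessions up into the three classic usability metrics."""
    n = len(sessions)
    return {
        # Effectiveness: share of participants who completed the task
        "effectiveness": sum(s.completed for s in sessions) / n,
        # Efficiency: mean number of friction events per session
        "efficiency": sum(s.errors for s in sessions) / n,
        # Satisfaction: mean self-reported ease-of-use rating
        "satisfaction": sum(s.ease_rating for s in sessions) / n,
    }

sessions = [Session(True, 0, 7), Session(True, 2, 5), Session(False, 4, 6)]
print(summarise(sessions))
```

A roll-up like this also surfaces the mismatch described above: a task can score high on satisfaction while effectiveness is low, which is exactly the gap between what users say and what they do.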
4. How did you go about creating the framework?
It was a big process and it wasn’t easy doing it during lockdown. To start with, I really wanted to conduct user research with our team of user researchers to understand how they worked. We had a workshop session on Miro where we mapped out the processes that we used. We discussed the differences and similarities that became visible during this mapping process.
As a result of these conversations, we identified a gap — we didn’t have a defined planning ceremony for research. We addressed this by designing a new ceremony that helped squads agree a focus for their research, moving from the broad starting point of user needs to a defined research question, hypothesis and task.
In order to trial the research planning ceremony we carried out live research sessions where squads collaborated with user researchers using a Miro board. This way of working allowed squads to be part of creating the research report, which resulted in a much leaner process, where we focussed on the specific hypotheses that had been created.
Once we had designed the ceremony, we ran a series of further trials with different squads to understand what worked well and how we could refine the ceremony further. The first ceremony trial took about 1.5 hours and through iteration we reduced it to 45 minutes, at which point we got more teams on board.
5. What went well with this process?
At BT we have a strong focus on inclusivity and democratisation. We believe that user research is a team sport. We don’t want to carry out research on our own and then broadcast it to our colleagues. We want them to be part of our process so that they can get a better understanding of how our users interact with us. As a result of this we had a high degree of engagement and interest from our colleagues throughout.
The structure of the ceremony let squads examine different points in the journey and carry out an initial heuristic analysis, which they found valuable.
This approach gave squads an immediate, at a glance view of whether their hypotheses were supported, whereas before they had to wait for the report. That is a really powerful way to give squads confidence.
6. What challenges did you encounter along the way?
Whenever you create a framework you have to take into account that everyone is used to working in their own way. While engaging with the different squads there was a mismatch in language between how we described ideas and how they described them. We had to create a shared understanding of what a user need is — an effort led by John, one of our service designers.
7. What does the future look like for user research at BT?
We’d like to measure the live experience using EES (effectiveness, efficiency, satisfaction) metrics. This would allow us to link usability measures to other quantitative metrics. We’d also like to develop the ways we work by running a clinic where squads who have written hypotheses can come to us for an element of coaching.
In general user researchers are going to have more influence outside the realm of what user research is today. Within BT there are innovation hubs where new ideas are explored. User researchers could really be the driving force in those areas.
What are your experiences with measuring user research? Let us know in the comments below.