How we do user research analysis at the Heritage Fund
In this post I’m going to talk about how we analyse our user research, but first a bit of context for who I am and where this is all happening.
I lead on user research at the National Lottery Heritage Fund. I work as part of our Digital Service Design team and we’ve been in place for around a year. We’re the first Service Design team in the organisation and we’re distributed across the UK, which can present some challenges to the usual user research ways of working.
Since I joined the National Lottery Heritage Fund, I’ve been working to build our user research practices so that the work we do is robust, but also allows us to deliver a fairly continuous programme of user research across the UK.
The National Lottery Heritage Fund is a nationwide organisation which means it’s really important that we include people from all across the UK in our work, which isn’t easy when you’re a small distributed team with one user researcher. To address this, a lot of our research over the last year has been done either completely or partially remotely.
We’ve saved our in-person research for activities where the team can get the most value from the travel: often this has meant linking multiple research sessions together in one day, or doing whole days in research labs.
Between April 2019 and April 2020 we ran 205 primary research sessions with applicants, grantees, members of staff and people who’ve never applied for our funding before. Of those sessions, around 200 were conducted remotely, and almost all of them have been analysed in the way I describe in this post.
Analysing research together
We’re using Miro to facilitate our affinity sorting as part of our research analysis.
If we weren’t such a distributed team, this would likely happen in a room with a bunch of post-it notes. But remote working is our normal and has been for a while, so it’s really rare that we work in that way.
If you aren’t familiar with Miro they have some specific terms which refer to certain tools in the product.
A ‘board’ is the file that you work in — our team has specific boards for specific pieces of work or focus.
We have a board which acts as a wall — it’s got insights, quotes and prototype screenshots on it and gives us a shared space where people can browse & add things.
A ‘frame’ is what we use to keep each round of research separate. Each frame is shown with a grey border & drop shadow.
You can send people a link to a specific frame so that they open the board in the right place.
A ‘post-it’ is a virtual post-it or sticky note.
Tags or labels are the coloured squares at the bottom of a post-it.
Preparing for the affinity sorting
The goal of affinity sorting as a tool for research analysis is to pull out the themes from a piece of research. These themes can then become insights, pain points or areas that need further research depending on the evidence from research. Insights should include what, why and so what. If they don’t have these three things, they’re less useful for the team to be able to respond to.
When explaining it to stakeholders, I tend to explain it like a game of snap — put the things that are similar next to each other and give it a title. It doesn’t have to stay in the first place it’s put so you don’t need to worry about it being right.
It can be a bit intimidating for people outside of our team to actively take part when 10 people are whizzing post-its about the screen, but even if people observe and join in the discussion at the later stages, it’s still really valuable for us. Over time people become more confident about what’s happening and can join in.
Before we can do an affinity analysis session, we need to have post-its to sort and because we’re remote 99% of the time, that takes a bit of prep work.
When we do research, we take notes in shared Word documents — one person in the team will lead on this and other observers will add any extra points in at the end. This gives us a pretty cohesive document to come back to for analysis.
In an analysis session, we go through the notes as a team and highlight the key points. It’s important for the affinity sort that people know what to pull out. We discuss each highlight — is it a pain point, context, an activity that people are doing or is it something else?
Pulling out these notes is an extra step that we wouldn’t do if we were together and had observed the research in person, but it means we have more informed conversations about what happened in research as we’ve all read the notes. It means as a team, we’ve all engaged with the research before we need to think about what the next steps are based on the evidence from the research.
During the discussion, we transfer the key points to Miro to affinity sort. It can be great to have a second person do this while someone else facilitates the discussion, but it isn’t necessary — split screening works, depending on the size of the screen!
The affinity sorting is often done in a separate session, otherwise it can be a bit intense as one big session.
Doing the affinity sort
After we’ve pulled out key notes, we start to do the affinity sort. At this point, most of the team are pretty familiar with the exercise, but to begin with we did this quite slowly. Now we tend to spend around 15–20 minutes on it, though it varies with the number of sessions, the variation in experience and the topic of the research.
We found it was important to give people time to become familiar with how Miro works — its performance can vary by device. This is particularly key if a new stakeholder is joining one of these sessions.
Once we reach the stage where our themes are pretty well defined and few or no post-its sit outside a theme, we spend 5–10 minutes checking through the themes individually. If there are any notes that don’t fit with the theme they’re in, people tag them as contentious.
We can then go through each theme and chat about them & any contentious post-its. From here, we can pull out insights & document the research in a sharable format.
We tend to have a space for people’s questions or ideas — we don’t want to lose them, but research analysis isn’t the time to discuss potential solutions & it can derail the session a bit.
Features that help us
We use features of Miro to help us when we come back to the research to document it.
Each frame is named with the round or focus of the research and the date so we can use the board for synthesis and trace information back to specific sessions if we need to.
Each post-it is tagged with the participant number and their role if they’re an internal participant. This helps us trace information back to a specific interview or session if we need more context for a note.
We also tag a post-it if it has been duplicated and so appears in two themes; usually this happens when a quote contains two points and it doesn’t make sense to split them.
The contentious tag is used when someone thinks a post-it is in the wrong place on the board. It tends not to stay on a post-it for very long, because we chat through these ones and move them to the best theme for them.
Each frame has a list of research sessions with links to the notes documents. This makes it easier for guests to see where the information came from and helps us if we need more context for a particular post-it.
Concerns with Miro
In the last year, this has been working pretty well, but we have found some problems to be aware of before you adopt this approach.
Miro is difficult to use on some devices and operating systems. Our stakeholders and some members of the team are using Surfaces, so it’s hard for them to actively participate. It’s been getting harder and harder over the last year.
The boards get slow if you hoard frames like we’ve been doing — it shouldn’t be used as an archive and we’re working to move away from that. While we work on a solution, we have a research board where we work and run these sessions, and a separate research archive where we store the work.
It’s not completely accessible. It’s difficult to use with magnification, and I can’t imagine it’s usable for voice recognition or screen reader users. We use colour & placement to indicate sub-themes. The colour contrast on some of the tags is very poor: the colour options are limited and the text is white. At the moment this isn’t actively creating a problem, but we’re very aware of it and want to do better.
There are different navigation modes for mouse and trackpad, which can initially make it harder to use. If you’re trying to use a mouse but Miro has incorrectly detected a trackpad, the board expects gestures which aren’t possible.
I’m absolutely not saying that this is the perfect way to do research analysis as a remote team, but it’s been working for us. If you have any suggestions or ideas about how we could improve our practice, I’d love to hear them!