Understanding our research using affinity diagramming

Emily MacLoud
Published in The Digital Fund · 7 min read · May 5, 2020

Law Centres Network, like many other charities, has adapted its plans to cope with the current crisis. Thanks to the enthusiasm and patience of the LCN team and the generosity and dedication of Law Centre staff around the country, our digital transformation project, funded by The National Lottery Community Fund, has continued smoothly.

However, at a recent workshop, which aimed to organise our findings from our research activities, our best-laid plans started to crumble. Below we describe how we adapted, reflect on each step and synthesise the lessons we learnt along the way.

The context

The digital transformation project is an ambitious initiative that aims to make law centres digital by default. As part of the project, we had conducted nearly 60 interviews across the UK with Law Centre staff when COVID-19 entered our lives. We had progressed from semi-structured, open-ended interviews to observations of staff performing key tasks, and we were about to venture into new territory when lockdown was imposed. Despite a few hiccups, we continued our research, albeit digitally. In mid-April we decided to pause our research activities to reflect on what we had discovered and where there were still gaps in our knowledge. We decided to run an affinity diagramming session through MIRO to organise our intel and engage key stakeholders.

For background on the method, see the Nielsen Norman Group's introduction to affinity diagramming: https://www.nngroup.com/articles/affinity-diagram/

The plan

Our original plan consisted of five steps:

Step 1: Absorbing findings

We planned to take around 30 minutes for all participants to individually read through the quotes, observations, goals and pain points (collectively “findings”). These findings had been plucked from the interviews and transferred onto “stickies”. We invited participants to add any findings, where necessary. We hoped this would give key stakeholders the opportunity to read and engage with the findings in their own time.

Step 2: First sort

We then planned to take another 30 minutes to sort these findings into stages of service delivery, where applicable. We determined this would be the best approach because it allowed us to organise the findings (of which there were about 300) into easier-to-manage categories.

Step 3: Second sort

We then planned to take another 30 minutes to group these stages into even higher-level categories. We hoped this would provide a way for us to collectively define our jargon.

Step 4: Visualise the end-to-end experience

We then wanted to map what service delivery looked like.

Step 5: Determine priorities

And finally, we wanted to end the session agreeing on what problems we should focus on.

What happened

In practice, the session unfolded in the following seven steps.

Step 1: Absorbing information

Following a brief introduction, we gave all the participants 30 minutes to absorb the findings, which had been organised ahead of time. Each yellow stickie represented one finding and was grouped by Law Centre. We added a comment box to each Law Centre naming the researchers who had visited it, so that anyone with a question knew whom to ask. Participants added notes they felt were missing, jotted down design ideas and highlighted where they felt our research was lacking.

Step 1: Absorbing Information. Each yellow stickie represented a unique quote, goal, observation or pain point.

Why was this step effective?

  • The findings were presented in an easily digestible way
  • We gave the participants the opportunity to take notes so they could process the findings in their own way

Step 2: First Sort

We then moved on to the first sort of the day. This step did not go according to plan. I had suggested that we start organising findings under tasks and sort these into chronological order. However, participants felt that what they were identifying were more intimate relationships between the findings. Grouping these findings into tasks, at this stage, just didn’t feel right.

As a result, we started to produce green stickies that represented intimate themes rather than tasks. Some were predictable (“signposting”, “client digital access” and “data privacy”). Others were more unexpected (“fear”, “data inconsistency” and “clarity about criteria”).

Why was this step ineffective?

  • We didn’t account for the way participants would process the information in the first step
  • Despite highlighting the purpose of the session and our plans for it, we had not made it clear that we wanted to map the findings in chronological order

Step 3: Second Sort

As the number of these green stickies grew, it became difficult to keep track of them. As a result, we started to group these green stickies together. We first grouped them into undefined clusters and then named the clusters using purple stickies.

Step 3: Second sort. Each purple stickie represented a high-level problem.

Why was this step effective?

  • Everyone was able to vocalise their thoughts and this provided us with a way to collectively define our jargon
  • We highlighted to the team that it was okay to disagree because we were effectively defining the boundaries of our problems
  • We spent sufficient time thinking about what we meant by some of the names to ensure our thinking was aligned (for example, “admin” was changed to “back-office” and “systems” was changed to “procedures”)

Step 4: Pause

We then took a moment to pause and reflect on our next steps. We decided not to proceed with our original plan because we did not want to break up our organically formed categories into artificial stages. Many of the problems we identified represented systemic problems that affected all stages of service delivery.

To decide how to proceed, we referred to our objectives for the session and the vision for the project. Each of the participants offered a few suggestions for a way forward:

A) Identify the gaps in our knowledge (originally planned)

B) Identify the broad themes that were emerging

C) Identify the findings that could be dealt with by other parts of the organisation

D) Identify how, and to what extent, these clusters were connected

Why was this step effective?

  • Everyone engaged with the process
  • Pausing allowed us to reflect on whether the original plan would produce the results we wanted for the session

Step 5: Identify your front stage / backstage findings

We decided to identify the broad themes that were emerging and settled on ‘Client’ and ‘Internal’. These terms represent what would be considered ‘front stage’ and ‘backstage’ in a service blueprint.

Step 5: Identifying broad themes that were emerging.

Why was this step effective?

  • It allowed us to start structuring our inherently messy data and identify how the service could be used by the client

Step 6: How can this research feed into other initiatives?

We parked those findings marked ‘Client’ and focused on addressing those findings marked ‘Internal’. We used MIRO’s tag feature and tagged each stickie accordingly (for us this included equipment, training, customisation, resourcing).

Why was this step effective?

  • Talking about other initiatives in the organisation reminded us to think of the bigger picture and allowed us to identify how this research could inform those initiatives

Step 7: Next steps and Feedback

We closed the session by discussing where we felt there may be gaps in our knowledge and how we might identify the connections between these clusters (we’re going to try out KUMU). The next sessions will involve mapping how these themes are interconnected and how we should approach tackling these problems.

Why was this step effective?

  • Everyone left clear about what the next steps were

Lessons learnt

Create enough space to allow key stakeholders to absorb information

This was, in fact, the second iteration of a session we had run previously, aimed at defining the problems we had encountered throughout our research. In that session we had not created enough space to understand the pain points and challenges that each Law Centre faced. That’s why in this session we gave participants sufficient time to really absorb the intel from the findings we had collected.

Reflect and organise intel on a regular basis

After each interview, we made sure to leave a 30-minute window so that we could discuss together what had just happened, make some additional notes and compile our thoughts while everything was still fresh. However, what we failed to do was review this material on a regular basis. In retrospect, we should have paused and run a session like the one above after, say, every 5–10 interviews.

Pause, reflect, align, go

At Step 4, I froze. I wasn’t clear on what we should do next. But by being open about not knowing which direction the workshop should go in, we were able to generate a lot of ideas and eventually decided on a sensible direction.

Take a principled approach rather than a strict one

There are so many great resources out there that just deciding which activity to run can become overwhelming. If you think critically about the context you’re operating in, who is participating and the problem at hand, the way forward becomes a little clearer.

It’s amazing to have a team that is familiar with the process and appreciates its value

It is a pleasure to facilitate a group who believe that design and development need to be driven by data and user needs. It’s a bonus if the team are comfortable with one another and feel able to suggest alternative solutions without fear that their suggestions will be side-lined.

We would like to express our gratitude to Law Centres across the country for the tireless work you do and for welcoming us into your Centres. For more information about Law Centres, please visit our website https://www.lawcentres.org.uk/about-law-centres.
