How NCVO learned about user needs and behaviours

This is the fourth post in my series about NCVO’s technology strategy.

(If you’ve missed the first three, this one may make more sense if you read them first: 1. Why NCVO decided to be radical and brave with its digital and tech plans, 2. Starting to make sense of complexity — mapping out NCVO’s digital ecology and 3. Assumptions, assumptions — how NCVO started talking about what we know (and what we think we know))

My last post left us primed with a knowledge board that set out which assumptions we wanted to test and what we wanted to learn about how users experienced NCVO’s large digital portfolio. Now we were ready to start gathering some data and insights.

This post is all about what we did, how it felt and what we learned about user research.

Looking at our knowledge board, we could see that there were some questions that would be best answered through qualitative user interviews. We wanted a deep understanding of how users felt, and why they used our sites in the ways that they did. But there were also questions that would be best answered through analysis of quantitative data. We wanted to know about the overlap in users of our different sites, for example.

Qualitative user interviews

This was my first experience of qualitative user interviews, although I’d enjoyed some great training. For such a critical project, I wanted to make sure we did the best possible job, so I brought in support from the excellent CAST. Simon I’Anson, then their Head of Product, took me through the whole process: designing the interview script, leading the interviews (although I increasingly chipped in) and synthesising the results.

The key things I learned were:

  • Everyone says it, but finding users to interview was hard! We wanted to talk to people who had used a range of NCVO’s websites regularly and recently, and they weren’t easy to find. That was itself a piece of insight about user behaviour, which the data analysis later bore out.
  • Analysing the interviews straight away really helped. We both took notes during each interview. Then, directly afterwards, we discussed what we’d heard and captured it onto post-it notes as user needs, behaviours or impressions of NCVO. We used these later in a synthesis session.
  • Having two people in an interview was invaluable. We’d often picked up on different insights, or interpreted the same comment differently. Debating these points deepened both our understandings and ensured a more rounded, less biased interpretation.
  • Videoing the interviews was great for communicating the insights to my colleagues. Simon edited a fantastic video of nuggets from the interviews, and this was a really good way to show people what mattered to our users and what their needs and behaviours were — a hundred times more powerful and memorable than any of our beautifully succinct synthesis could be.
Simon I’Anson, then of CAST, synthesising the user interviews with me. Each post-it note captures a user need, behaviour or impression of NCVO that we noted directly after one of the interviews.

Quantitative data analysis

We had some questions that could only be answered by analysing quantitative data: how much overlap there was between the users of our different sites, for example.

We had a data set that was potentially very rich, but we did not have the capability to analyse it ourselves. The data comes from Click Dimensions, a service we use primarily for email marketing, integrated with our CRM (Microsoft Dynamics). Click Dimensions also tracks some users through our main websites, connecting their web visits with their CRM records if they engage with our emails or submit a Click Dimensions web form (and don’t block tracking). For this project, it meant we could compare, for example, users from different sizes of organisation, or the behaviour of users who work or volunteer for an NCVO member organisation against those who don’t. We describe this in our privacy notice.
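To make the overlap question concrete, below is a minimal sketch in Python (with pandas) of the kind of analysis involved. It is an illustration rather than our actual analysis, and the column names (contact_id, site, is_member) are hypothetical stand-ins for an export of tracked visits joined to CRM records.

```python
import pandas as pd

# Hypothetical export: one row per tracked page visit, with columns
# contact_id, site, is_member (assumed names, not the real schema).
visits = pd.read_csv("visits.csv")

# The set of sites each contact has visited.
sites_per_user = visits.groupby("contact_id")["site"].agg(set)

# Overlap: how many contacts visited both sites in each pair.
sites = sorted(visits["site"].unique())
for i, a in enumerate(sites):
    for b in sites[i + 1:]:
        shared = sum(1 for s in sites_per_user if a in s and b in s)
        print(f"{a} & {b}: {shared} shared users")

# Segmenting behaviour: visits per contact, members vs non-members.
per_contact = (
    visits.groupby(["contact_id", "is_member"])
    .size()
    .rename("visit_count")
    .reset_index()
)
print(per_contact.groupby("is_member")["visit_count"].describe())
```

Questions like these only take a few lines of pandas, but as you’ll see below, having someone who can write and maintain those lines is a capability in its own right.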

So CAST brought data scientist David Kane into the project. I’d worked with Dave for many years when he led NCVO’s quantitative research programme, and he’d since moved to CAST (he is now freelance).

My main takeaway was an appreciation of how valuable it would be to have this capability in our team. The main hurdles and downsides we dealt with were:

  • Data protection concerns — as Dave wasn’t an employee, and wasn’t using our infrastructure, we spent time carefully thinking through the data protection implications and putting in place appropriate measures backed up by a contract.
  • Languages and tools — Dave used Python to do the analysis, but this wasn’t a skill set we had in the team. So even with the scripts that Dave had written, we couldn’t build on his great analysis.

We are now looking at developing Python skills in the team, as well as learning more about Power BI, a tool we already have available to us. I’ve wanted to do this ever since we worked through the data maturity framework produced by DataKind UK and Data Orchard in 2017, so I’m really excited that we’re finally making some progress.

Next week I’ll share what we learned — from the qualitative interviews and quantitative analysis — and what our knowledge board looked like after our research phase.

PS

If you’ve made it this far, I’d really like to thank everyone who has sent me positive feedback on Twitter, liked, retweeted and clapped — it has really motivated me to keep carving out time to share our journey.