The map and the compass: designing a path for learning to do user research
This is part 3 of a series of posts exploring how to teach people to do their own design research. You can check out the first post here.
It’s Week 5(ish) of the pilot Design Research training program. I’ve been teaching the two designers I’m working with: what qualitative research is and why to do it, when to do it, what methods are available, and how to decide which users to include in research. In the coming weeks, we’ll be designing a study together, actually doing the research, and then analyzing and making sense of the data.
A few observations so far
The response to the content so far has been positive. As the sessions progress, we are moving from theory to practice: earlier sessions leaned on show-and-tell, with slides and discussion, while recent sessions involve more collaborative work, starting with a ‘blank sheet’ instead of nicely polished diagrams and slides.
When starting a session with an exercise or a discussion, I’ve noticed that I am more comfortable than I thought I would be with the unknown, the things I can’t control. It means I don’t have to rely too heavily on organized presentation material (the known, things I can control) in order to deliver learnings.
Rapport is building over time, and we are getting to know a lot about each other. A really wonderful exercise, for me personally, involved plotting biases that we should look out for when running the research. It was a moment to be totally honest about ourselves and our concerns. I’ve included a screen-grab of this below (for the purpose of sharing publicly, ‘Bruce’ is a pseudonym). We will return to these when designing and executing our research project, to keep an eye on them.
Valuable learnings from this pilot, to apply to the design of the training program
1. Program participants without much previous exposure to research need to see real examples early on, to understand what they will get at the end. In this training, I’m trying to demystify user research. But I’ve avoided showing people what every stage of research looks like through examples, because I don’t want those examples to be the things people latch on to, influencing their own study. I’m relying on the ‘Aha!’ moment at the end of our time together, when hopefully everything will make sense. Seeing is believing, but relying on the seeing happening at the end doesn’t feel like a good strategy for participant engagement.
2. Each session should begin with a recap of what we have done, the goal of today’s session, and what to expect in the coming weeks. At the very start of the program, I explained to the participants what we’d be doing, when, and what to expect from each session. But I haven’t done a good job of recapping that since. The program participants have a life outside of my training; when we start a new session, it’s hard for them to context-switch out of their day-to-day work.
One participant said that “each session is like using a compass, but I need the map to make sense of where we are and where we’re headed”.
3. One-on-one teaching is the right approach, but students should have regular touch-points with one another. Participants want to know what other people on the course, their peers, are working on. They want to understand the potential landscape of research, for greater confidence on their own journey. This will also help address point 1, above. Once the pilot is over and the ‘real course’ begins, I will shape the program so that we start together, as a group, and then meet up at strategic points along the way to share what we’re working on and reflect on our experiences.
4. When I run this course at scale, I need to allocate around 10 weeks* for completion. In a classic case of planning fallacy, it will take longer to train people than I had originally expected. We’re not working in a vacuum; the research needs we are prioritizing, the methods we are selecting, and the users we are recruiting are all for a real thing. Stakeholders in their team need to be consulted. Mock-ups may need to be created. The real world is messy.
*Perhaps planning fallacy is still coming into play here; maybe it’s more like 12 weeks. Or 15. I’ll let you know by the end!
I will continue to learn and reflect on the process. I’m excited, and maybe a bit nervous, about actually doing the research — because I know how much energy and dedication it takes to do well, and I am going to be leaning on the participants to do a lot of the heavy lifting. I want to set them up for success, but allow them room to fail and learn, too.
Georgia is a Design Researcher at ConsenSys, the blockchain innovation lab and start up incubator. Find out more about the ConsenSys Design Circle and the web 3.0 solutions we’re designing here: https://consensys.design/ and follow us on Twitter.