Honing your user research craft: Doing a critique through a “Don’t Leave Data on the Table” class

Marianne Berkovich
8 min read · Dec 1, 2016


If you are someone who conducts user interviews, you’ve probably developed a personal style for how you run a session. When you do something repeatedly, it becomes second nature, and you stop thinking critically about what you’re doing. Bad habits emerge, and best practices erode over time. Haven’t we all unintentionally raced through the intro spiel with a participant because WE’VE heard it a million times?

Unlike our design partners, user experience researchers do not have a regular practice of critiques — observing each other’s sessions and offering suggestions on how to improve. You probably have product team members watching your sessions, but they are focused on the participant, not on you, and not on bringing a critical eye to your research technique. When they ask, “How’d it go?” they want to hear what insights and findings you uncovered about the product. If someone also asks whether the research itself “worked,” consider yourself lucky; most of us rarely take time to reflect on and refine our own techniques and approaches, or to discuss them with others. As user researchers, we need opportunities to get feedback on our techniques and approach in order to continually improve our craft.


To avoid the user researcher rut and create a safe place for user researchers to give each other feedback about their session moderation practices and technique, I recruited my fellow user researchers Martin Ortlieb and Elizabeth Baylor to create a class called “Don’t Leave Data on the Table” for Google’s UX University 2015 (an annual gathering where Google’s user experience professionals learn from one another). The class was so popular and oversubscribed that we have repeated it several times since.

This story describes the key principles of running this class.

  1. Create a safe space
  2. Conduct the critique
  3. Reflect and make an action plan


1. Create a safe space

When we invited class members, we were transparent about what would happen in the class — that we would form small groups to watch a video of each class member moderating a user interview. We asked class members to send us links to their videos ahead of time so we could preview them and pick the “juiciest” one for the session, the one that seemed to have the most interesting moments.

When class members arrived in the classroom, we made the environment playful and friendly — playing music and decorating the space with a special surprise.

We knew that people would be uneasy about watching themselves on video, so we did an ice breaker that also got people feeling good from the get-go. The ice breaker did double duty by modeling giving positive feedback and building on the suggestions of others — both of which would be important for a good group critique.

We used an Amplification exercise that Denise Brosseau introduced us to: in pairs or threes, one person says something they are good at — it could be something silly like “I am great at picking the right size Tupperware for leftovers” or “I’m great at changing the oil in my car myself.” Then a partner amplifies this trait, saying something that might sound like, “Marianne is so great at changing the oil in her car that she can do it in under 5 minutes — take that, Jiffy Lube. She is so fast at it that she has NASCAR pit crews asking her for tips.” You get the idea — the amplification doesn’t have to be based in fact; it only needs to take a small trait and amplify it to something enormous. And it feels good.

As we started the class, we reminded the class members that we are all in this together: “You are all good researchers, and we are all here to learn together — none of what we’ll be doing today is about our flaws, but about improving ourselves. We know that watching videos of yourself is hard, and we’ll be making ourselves vulnerable in order to learn.” We were lucky at Google that there was a strong culture of respect and trust already, so we did not need to do much to bring that spirit to the class. If you try this class on your own, we suggest doing it with a group of people with an existing sense of trust, whether this is coworkers or people in your own network.


2. Conduct the critique

We formed teams of three or four class members per instructor, and each class member was allotted 15 minutes. We started each video 16 minutes into the session, past the introduction and technical setup and into the heart of the content. The class member introduced the video and the context of the study, and then we hit play.

We borrowed a concept from the Kaizen philosophy of continuous improvement, where each worker on the production line is empowered to stop production when they spot a defect by pulling the Andon cord. The Andon cord is a device used to notify management, maintenance, and other workers of a quality or process problem. We started watching the video together, and everyone was invited to pull our version of the Andon cord when they spotted something that was deserving of feedback. Our version added an air of whimsy to the class (and we’d like to keep it a surprise until you take our class!). You can get creative with your own lighthearted version that creates a visual or auditory cue.

We encouraged the other group members to go with their hunches and to not be shy about pulling the cord based on just an inkling of an observation — when a little something just didn’t sit right with them. When we stopped the video to make an observation, we invited the person whose video it was to “rent the idea” — not necessarily “buy it.” This meant listening to what people said, and evaluating later. It’s possible that the group picked up on something that was a fluke or that the class member was having a bad day, but “renting the idea” was an opportunity for the person to consider whether what was noticed was part of a larger pattern.

We invited the person whose video it was to critique themselves first since they have the most context about what was happening, and are the most familiar with their own actions and behaviors.

We drew inspiration for how to give feedback from Patrick Cox’s Unwritten Rules of a Design Critique. In particular, we found that the following rules apply well to a user research critique:

  • Use comparisons between class members sparingly — focus on the current session
  • Be specific
  • Ask why
  • Offer suggestions and alternatives
  • Be positive — stop the video to comment on things that work well too


3. Reflect and make an action plan

We wanted to make sure that class members would walk away with actionable insights. We gave them a note-taking sheet based on the SKS (Stop, Keep doing, Start) framework:

  • Things to stop doing
  • Things to continue doing
  • Things to start doing

Each group member was responsible for taking notes for themselves. We believe that each person should be responsible for their own learning and is the best judge of what was most applicable to them and what they could best absorb right now. We also found that people learned as much from other people’s videos as they did from their own, as they saw themselves in the missteps and missed opportunities of others; many people took as many notes during other people’s sessions as they did their own.

The facilitators had watched the videos beforehand and were prepared with observations to share, in case the person or the group members didn’t bring these up. This helped ensure that each person came away with substantive insights.

At the end of class, we gave class members time to reflect on what they heard and decide which ideas they wanted to buy and how they were going to make those changes.


After running several sessions of the class, we started to notice patterns: phrases that have almost become “folk wisdom” within the user research community but that can thwart the type of atmosphere where deep insights emerge. Let’s take a look at three of them.

1. “We’re not testing you”

The ostensible goal of saying this is to put the participant at ease and reassure them that any issues they encounter are with the software, not with themselves. However, by bringing up testing and then reinforcing it with subtle (and sometimes not so subtle) cues that this is a test, our words undermine our intentions. For example, later in the session you may find yourself saying, “Can you see a way to do X here?” — implying that there is a way to do X and the participant needs to find it to pass the exam — or showing excitement when the participant does the “right thing” to progress in a task. In our class, we’ve watched videos of participants tensing up and then guessing different actions until they found the right one that won the approval of the researcher. This testing atmosphere is generally not what we want our participants to experience, so we need to pay attention to the messages we send with both our words and our body language.

We suggest focusing on what the experience is rather than what it isn’t. Rather than introducing the idea of a test, frame the session as a conversation in which the researcher and the participant will be looking at or doing some things. Another solution is to remind the participant of why THEY were selected for the study — because of their unique experience which is exactly what we’re interested in.

2. “We’d like your feedback on…”

This one is related to the testing and evaluative atmosphere that you can unintentionally create. The word “feedback” implies that you are asking the user to make an evaluation. Sometimes that is exactly what you are looking for, but most of the time you are much more interested in understanding and observing how participants interact with a product or service. You can ask for a summary assessment at the end of the session, but opening the session by saying you are looking for feedback begs the participant to put on their “Evaluator” hat rather than their “User Using a Product” hat.

3. “What are your thoughts on this?”

Open-ended questions are great, but there can be too much of a good thing. If your goal is to get a sense of what’s most important to someone, then by all means ask this very open-ended question. But beware the trap of the lazy question: asking it because you couldn’t come up with something better puts the onus on the participant to guess what you are interested in hearing about. For example, are you interested in their understanding of what happened, how useful the feature is in their workflow, what the feature doesn’t do that they need it to do, or something else entirely?


We’ll be teaching this class as a workshop, called “UX Interviewing: Personalized Coaching to Avoid Leaving Data on the Table,” at the upcoming CHI conference in May 2017. We’d love to see you there.

If you can’t make it, we’d love to hear from you on how you hone your craft, and what you think of this approach. Leave a comment below or contact us at hello@marianneberkovich.com.


Huge thank you to Elizabeth Elliott Baylor for her invaluable feedback on this story!



Marianne Berkovich

Qualitative user experience researcher exploring the intersection of social entrepreneurship, technology, and design thinking. User research at Glooko.