The “What” and “Why” of User Interviews and Usability Testing
Sep 3, 2018
User Interviews
- Creating a “Topic Map”: we first brainstorm as a team which topics affect or intersect with our main research topic. This helps us build better questions for our user interviews, so we come away with richer, more insightful data.
- Next, we create a script to guide the interviews. We introduce ourselves and the other members of our team and make sure the interviewee is comfortable and focused. We then order our questions from the broadest and least personal to the most specific and/or personal. This ordering tends to help interviewees open up and tell us personal stories, which often make for the most insightful data.
- Next, we recruit interviewees from our target demographic so that the data we collect comes from relevant participants.
- Next, we conduct our interviews in pairs: one person focuses on the interviewee and the questions, while the other records and takes notes.
- After each interview, we immediately write down 3 takeaways while the memory is fresh.
- After all interviews are complete, we listen to the recordings and clean up our notes for hand-off or to begin synthesizing the data.
Usability Testing
- We begin usability testing with a standardized script so that our tests are controlled and easily replicated (important qualities of any test, scientific or not). The script always opens the same way: we explain who we are, introduce our teammates, remind the user that we are testing the app, not them, make the user comfortable, and ask permission to record. Recording tests allows us to share our work with outside stakeholders and within our team as we synthesize findings. The script also includes a series of demographic questions and task scenarios based on our target persona and what we are testing.
- We then create and send out a screener survey to make sure we recruit users within our target demographic.
- We then set appointments with more users than necessary, in anticipation of the inevitable no-show or cancellation.
- Finally, it’s time to conduct the tests! Again, we work in pairs: one person administers the test while the other records the results. Before testing begins, each task scenario should have defined success and failure criteria.
- After each test, we write down 3 takeaways while the memory is fresh.
- Finally, we compile all of the test data and put forth a set of official design recommendations for the next iteration.