Good Call: Replacing Street Interviews

How I’ve learned to embrace “phone intercepts” — quick, low-overhead research interviews for the age of social distancing.

Mark H
Meta Research
Sep 17, 2020 · 7 min read

Before the pandemic, researchers often used “street intercepts” to get immediate feedback or insight. For instance, when colleagues and I wanted to know what people called different parts of our user interface, we went to a nearby park armed with some screenshots and swag, and asked people if they had a few minutes to talk. If they did, we conducted a short interview. A few hours later, we’d gotten a good range of answers, with minimal overhead.

Unfortunately, between lockdowns, social distancing, and masks, it’s unlikely that street intercepts will be possible for quite some time. The method also has other limitations, like bias from geographic homogeneity: you only hear from the people who happen to be near your office.

Can You Talk Now?

A “phone intercept” is a method that re-creates the immediacy and spontaneity of street intercepts while also trying to address some of their biases. Best of all, I can run them from my couch.

Rather than trying to schedule participants for a phone call some time in the future, I ask if I can talk to them now, meaning “in the next hour.” This eliminates the overhead of scheduling while getting people when they are “in the moment.”

I’ve used this strategy to do short (10–20 minute) interviews. It’s easy to execute, provides a wide range of participants (including a number of rural respondents), and has given me great insights.

Method

Below are the steps I’ve taken to prepare and perform phone intercepts. As you’ll see, you don’t need a lot of tools to use this method. The most important requirement is a way to run a short survey or questionnaire within your app or site. You also need to be able to easily turn the survey on and off, and to get the results in near-real time. Most third-party survey tools, like Qualtrics or Usabilla, support this functionality.
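
To make this concrete, here is a minimal sketch of the toggle-and-fetch workflow against a generic survey API. The base URL, endpoints, survey ID, and token are all hypothetical placeholders rather than real Qualtrics or Usabilla calls; substitute whatever your tool actually exposes.

```python
from typing import Optional

import requests

# Hypothetical survey API; swap in your tool's real endpoints and auth.
BASE_URL = "https://surveys.example.com/api/v1"
SURVEY_ID = "phone-intercept-recruit"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}


def set_survey_active(active: bool) -> None:
    """Turn survey delivery on or off for the session."""
    resp = requests.patch(
        f"{BASE_URL}/surveys/{SURVEY_ID}",
        json={"active": active},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()


def fetch_responses(since: Optional[str] = None) -> list:
    """Fetch responses, optionally only those after an ISO timestamp."""
    params = {"since": since} if since else {}
    resp = requests.get(
        f"{BASE_URL}/surveys/{SURVEY_ID}/responses",
        params=params,
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["responses"]


# Turn delivery on about an hour before the first planned interview,
# and off again at the end of the session (see "End of Session" below).
set_survey_active(True)
```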

Prework: Discussion Guide

I’d already written my discussion guide, which was needed for our internal research review. Some thoughts for the discussion guide:

  • These interviews should be relatively short (10–20 minutes max) so participants think of them as a low-overhead, spur-of-the-moment activity.
  • I approached this as more of a “street intercept over the phone” than a “short in-depth interview.”
  • I included a section about building rapport with the participant. I knew I wouldn’t be able to use body language or mirroring as I would in an in-person or street intercept, nor would I have the time to develop trust like I would in a longer interview.

Prework: Recruiting Survey

Next, I wrote out the survey to recruit the participants and programmed it into our survey tool. In the survey, I made sure to explain a few things (the sketch after this list shows how these points might fit together):

  • On the first page, I plainly established time expectations, as in “a 10-minute phone call in the next hour.”
  • I made the topic of the research clear.
  • I included an explicit opt-in question to participate. This is part of our research policy, and I like that it gives the participant clear control over whether to join the research. Only after a participant has opted in do I ask for a phone number.
  • I made clear the expectations of privacy: who would be calling, who would be listening, whether the call would be recorded, and how the information would be used.
  • I noted that not everyone who replied to the survey would be called.
  • I’d also suggest including a “preferred name” question, which helps both in starting the conversation and in making sure you’re talking to the person who answered the survey.
  • I mentioned the research participation agreement (RPA) that they’d receive by email and would need to sign before the interview.
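
Here is the sketch mentioned above: the recruiting survey expressed as plain data. The wording, field names, and branching rule are illustrative assumptions, not the survey I actually ran.

```python
# Illustrative recruiting-survey structure; all wording and field
# names are placeholders, not the survey used in the study.
RECRUITING_SURVEY = [
    {"type": "info",
     "text": ("We'd like a 10-minute phone call in the next hour about "
              "how you use this product. Not everyone who replies will "
              "be called, and participants receive a gift card. Before "
              "the call, you'll get a research participation agreement "
              "(RPA) by email to sign.")},
    {"type": "yes_no", "field": "opt_in",
     "text": "May we call you in the next hour?"},
    # Contact details are requested only after an explicit opt-in.
    {"type": "text", "field": "preferred_name",
     "text": "What name should we use when we call?",
     "show_if": {"opt_in": True}},
    {"type": "text", "field": "phone",
     "text": "What number should we call?",
     "show_if": {"opt_in": True}},
]
```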

Prework: Introductory Email

I prepared a standard email to send out when I got a survey response. I used this email to introduce myself as the researcher, set expectations about call time, and explain the forthcoming RPA. (I worried that the RPA process would lead to a lot of respondent drop-off, but that turned out not to be the case in practice.) I also included a contact for them to ask additional questions. I used Outlook’s templates here, which saved a lot of cutting and pasting.
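
If your mail client or workflow supports scripting, the same templating idea can be automated. Below is a hedged sketch using Python’s string.Template; the field names and wording are illustrative, not the Outlook template I actually used.

```python
from string import Template

# Illustrative introductory email; wording and fields are placeholders.
INTRO_EMAIL = Template("""\
Hi $name,

Thanks for agreeing to a quick phone interview. I'm $researcher, the
researcher on this study. I'll call you between $window_start and
$window_end today.

Before we talk, you'll receive a research participation agreement (RPA)
by email. Please review and sign it before the call.

If you have any questions, contact $support_contact.
""")

body = INTRO_EMAIL.substitute(
    name="Jordan",
    researcher="Mark",
    window_start="2:00pm",
    window_end="3:00pm",
    support_contact="research-support@example.com",
)
print(body)
```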

Getting Started

I usually blocked off a session of 2–3 hours to do the interviews. For context-specific reasons, I did these interviews by myself. I don’t recommend this approach; there was enough to do in addition to conducting the interview that a second person would have been very useful.

One hour before I expected to start the interviews, I turned survey delivery on. It takes our tools some time to start delivering the survey, and it takes recipients a while to respond. I’ve found that an hour is usually long enough for all this to start working.

When I was ready to start, I opened up the survey results site and kept reloading it until I got a response. (There’s a certain “middle schooler waiting to see if their crush will call them back” aspect to this part. Revel in it!)
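
If your survey tool exposes results programmatically, a small polling loop can do the reloading for you. This sketch builds on the hypothetical fetch_responses helper from the Method section and simply surfaces each new respondent once; stop it when the session ends.

```python
import time

seen_ids = set()


def poll_for_respondents(interval_seconds: int = 30):
    """Yield each new survey response exactly once, polling forever."""
    while True:
        for response in fetch_responses():
            if response["id"] not in seen_ids:
                seen_ids.add(response["id"])
                yield response
        time.sleep(interval_seconds)


# For each new respondent: send the introductory email and the RPA,
# then make the pre-interview call.
for respondent in poll_for_respondents():
    print("New respondent:", respondent.get("preferred_name", "unknown"))
```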

Once I got a response, I sent the introductory email to the respondent using the template I’d prepared. I filled in the time frame for them to expect the call — an important step in case several respondents stacked up, preventing me from calling them all immediately. However, I always tried to stay within the hour commitment established in the original survey. After I sent the introductory email, I sent the RPA.

I usually had to make a phone call before the interview to check whether they’d received the RPA; invariably, they hadn’t seen or signed it. I asked them to review and sign it, and then called them back five minutes later. (I avoided waiting on the line, both because it can feel coercive and because they often had only their phone on hand to fill out the RPA.)

Interview

In the second call, I did the actual interview. Because of the sensitive nature of my interviews, I did not record the calls and took notes by hand. This was a mistake; in the future I’ll either record the call or have a second person take notes (only with consent, of course).

At this point, it’s “rinse and repeat”: get new survey responses and make the calls.

End of Session

I turned off the survey. This ensured I wasn’t capturing new responses, which could have set inappropriate expectations with users. I finalized participant compensation and sent out the gift cards. Then I analyzed all the sweet, sweet data I’d just collected!

Thoughts and Improvements

  • I had a very small respondent pool, so I called everyone who responded. With a larger pool, I’d try to balance respondents across demographics of interest, including age, gender, and usage patterns.
  • This method could work for videoconferencing, but I want to emphasize that a key goal here is to make it as easy as possible for respondents. Don’t underestimate the overhead of installing an app or even of changing out of their pajamas. I’d use very common tools that participants would be likely to have installed already, like Zoom, Skype, or Facebook Messenger.
  • No participant expected to be compensated, despite having been informed in both the survey and the email. One even thought they had to pay me(!) All were pleasantly surprised to be told they were getting a gift card.
  • I suspect that different times of day and different days of the week would yield different response rates. I’m based in Facebook’s London office, but these surveys were done with US participants in the middle of the day US time.
  • Please run the survey only while you can do the live interviews. Trying to use this method to get participants, and then rescheduling, breaks trust. (“You said you’d call in an hour. Now you want to talk in two days?”)
  • I made sure this method was compliant with our policies and broader research ethics. We explicitly got the participants’ permission to contact them, the participants signed an RPA, and we compensated them for their time. There are several places where the participant can opt out with no repercussions, and the participants understood the topic of the interview. I did this with a sensitive population, and my internal research review surfaced no concerns around this method.

Thanks to my colleagues Younghee Jung and Pete Fleming for their help in making this work.

Author: Mark Handel, UX Researcher at Facebook

Illustrator: Drew Bardana
