A Step-by-Step Guide to Facilitating Qualitative Data Analysis in Remote Teams

Emily Williams
Marketade
8 min read · May 15, 2020

Recently, I led a study in which we recruited 40 users to participate in a remote evaluation of existing health websites. A significant amount of planning and preparation went into the project before the launch. As you can imagine, running a usability study with 40 users generates an enormous amount of data. Our agency, Marketade, is a fully remote organization. So how did we work together to analyze what was easily thousands of pages of data?

In this particular project, we decided to turn our "research as a team sport" approach on ourselves. Since many of us are transitioning to remote work during this pandemic, I am here to offer a glimpse into our life as remote workers and walk through how to facilitate the analysis of massive amounts of textual data with remote teammates.

Step #1: Decide how you’ll code your interviews

This step is fairly loaded, so it's important to get it right. There are several ways to code. Our team decided to use a deductive, or a priori, coding approach instead of an inductive, or in vivo, approach. I developed a code book before the sessions were conducted that we would use and refine during analysis. If you're trying to decide which approach to take, a few simple questions can help:

  1. Do you have a good idea of the problem space?
  2. Are you looking at something specific, or are you looking for general feedback across a wide array of constructs?
  3. Do you have any existing data to guide you on where users might be experiencing frustration?
  4. Are there specific business goals you know you need to accomplish?
  5. Are you on a tight timeline?

For this project, we were working with specific parameters (namely, user barriers and facilitators within a defined process on the website), so we opted for deductive coding. If your scope is fairly narrow and your deadline is ambitious, deductive coding is a good option.

If, however, you're in a very generative and exploratory phase, I recommend that you strongly consider inductive coding. It's more labor-intensive, but you get a rich, in-depth look at your data. Examining your data with no preconceived notions of what you might find is a compelling way to really familiarize yourself with your users' perspectives.

It’s also important to note that these two styles are not mutually exclusive — I often find myself doing some kind of inductive coding even when I have developed a code book. There’s a difference between rigor and rigidity — it’s important to be open to listening to the data and adjusting your codes when you encounter data that challenges your assumptions.

Once you’ve figured out your process, have a meeting with researchers on your team to ensure you’re all on the same page about the process you are using and how you’ll implement it. This was an important learning for us. In our project, we initially tried to transcribe and code these interviews in a qualitative coding software program. Long story short, it was a nightmare. We ended up re-watching the videos and coding the recordings using Excel. For video heavy data where clips illustrate what you’re trying to communicate, sometimes it’s best to use a guerrilla approach.

Step #2: Conduct the research

If you’re not familiar with UX methodology, I recommend you check out this article by Nielsen Norman group. The methodological specifics are beyond the scope of this article, but I want to point out that the mechanics of conducting the research impacts the division of labor later on. In our project, we conducted remote sessions with users and we had two researchers on almost every call. In reflecting back on our efforts, we could have been more intentional about assigning a note taker to code based on the code book. Coding the sessions live gives a useful baseline to compare when we do our final analysis.

To streamline the analysis process, I recommend having a note taker on the call and then having a separate researcher code the interview after it has occurred. Doing so yields insights into how others interpret what we see, helps us keep our biases in check, and facilitates conversations about our observations later in the analysis process.
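One lightweight way to surface those conversations is to compare the codes the live note taker applied to an interview against the codes the post-hoc coder applied, and flag sessions where they diverge. Here is a toy sketch of that idea; the participant IDs, codes, and threshold are all hypothetical:

```python
# Compare live vs. post-hoc code sets per interview and flag divergence.
# All of the data here is made up for illustration.

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of codes (1.0 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

live_codes = {"p01": {"motivation", "barrier"}, "p02": {"facilitator"}}
posthoc_codes = {"p01": {"motivation"}, "p02": {"facilitator", "barrier"}}

for participant in live_codes:
    overlap = jaccard(live_codes[participant], posthoc_codes[participant])
    if overlap < 0.8:  # arbitrary cutoff; adjust to taste
        print(f"{participant}: overlap {overlap:.2f} -- worth a conversation")
```

A low overlap isn't a failure; it's exactly the kind of discrepancy you want to talk through as a team.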

Step #3: Divergence: The coding process

By this point, you have recordings of all your interviews. You might even have transcripts. So what do you do with them?

First, make a plan for dividing the labor. Who will code which interviews? As I mentioned earlier, it's helpful to have interviews coded by people who didn't serve as note takers. If you didn't have that process set up in the beginning, don't fret! In the absence of it, my preferred strategy is to assign a mix: have people code some of their own interviews and some of others'. There's a lot to be gained from coding your own interviews, since you witnessed subtleties that a fresh pair of eyes might miss. Conversely, when you examine another researcher's interview, you are the fresh pair of eyes, and you'll inevitably recognize something the interviewer missed.
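If it helps to see that assignment strategy in concrete terms, here's a small sketch. The researcher names, session IDs, and 50/50 split are hypothetical choices, not a prescription:

```python
# Assign each interview a coder: sometimes the person who conducted it,
# sometimes a colleague with fresh eyes. All names and IDs are made up.
import random

sessions = {  # interview ID -> researcher who conducted it
    "p01": "ana", "p02": "ana", "p03": "ben",
    "p04": "ben", "p05": "cam", "p06": "cam",
}
researchers = sorted(set(sessions.values()))

random.seed(42)  # make the assignment reproducible
assignments = {r: [] for r in researchers}
for session_id, owner in sessions.items():
    # Flip a coin: code your own interview, or hand it to someone else.
    if random.random() < 0.5:
        coder = owner
    else:
        coder = random.choice([r for r in researchers if r != owner])
    assignments[coder].append(session_id)

for coder, ids in assignments.items():
    print(coder, "->", ids)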

Once you set up the division of labor, it’s time to immerse yourself in the data. If you’ve decided on deductive coding, use that as a starting point. Get familiar with your code book.

Next, get familiar with your participants. Start by trying to step into a state of curiosity, or, as the Buddhists call it, the "I don't know" mind. I find it helpful to read a few transcripts and just repeat these questions in my mind as I read:

  1. What are my users telling me?
  2. What can I learn from them?
  3. Where are they showing me what matters to them?

Doing this a few times has been a helpful practice for staying grounded in, and curious about, my users' feedback. Once I've done that, I feel ready to start coding.

The process of coding is always a tricky one to articulate, because we all have unique experiences that influence how we interpret what people say. This is why I advocate doing this part individually. We have unconscious thought patterns and biases that ultimately seep into our representations of the data. Starting with divergence helps each of us get familiar with the data and formulate our own perceptions of it.

A quick word on the mechanics of how to do all of this. My personal preference for coding is an Excel spreadsheet or, when available, qualitative coding software. If I'm using Excel, I like to set up my codes as rows and label columns for notes, quotes, and timestamps. Some of my colleagues like Word, and some like to put pen to paper. You might have to try different methods to find out what works for you.
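If you go the Excel route, you can even generate that layout programmatically. Here's a minimal sketch using pandas (writing .xlsx files also requires openpyxl); the codes and participant IDs are the hypothetical ones from earlier:

```python
# Generate a per-interview coding template: one row per code, with
# columns for notes, quotes, and timestamps. Codes and IDs are hypothetical.
import pandas as pd

codes = ["motivation", "barrier", "facilitator"]
template = pd.DataFrame(
    {"code": codes, "notes": "", "quotes": "", "timestamps": ""}
)

# One sheet per interview keeps each session's coding self-contained.
with pd.ExcelWriter("coding_template.xlsx") as writer:
    for participant in ["p01", "p02", "p03"]:
        template.to_excel(writer, sheet_name=participant, index=False)
```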

Step #4: Convergence and collaboration

You’ve now spent a significant amount of time disassembling this data into its smallest components. Time to put it together. But how do you do that remotely?

So glad you asked!

We tried a new approach on our team using Miro. I organized a two-hour session and instructed each researcher to bring their notes and codes from the interviews, however they had compiled them. Within Miro, I went with my tried-and-true color-coded sticky note process. I'm a visual person, and color coding is a very helpful way for me to organize data. I started with a purple sticky note labeled "motivation," one of our codes from the code book.

I asked one of the researchers to share something that stood out to her under this theme. She shared an observation that our initial hypothesis rested on a flawed assumption: we assumed that usability problems were preventing users from completing a process on the website. Her impression was that users likely got distracted in their own lives and didn't have sufficient motivation to complete the form. After she shared her observations, I made a sticky note summarizing what she said.

Next, I asked the other researcher on the project if she had observed anything related to that, or something entirely different. This third researcher wondered if users got to a certain place in the form, realized it was different from their expectations, and quit. She observed that users had a different impression of the website than what it actually was; maybe they learned that later in the process and decided they didn't want to complete it. I made a sticky note recording her observations.

Finally, we discussed the two observations. In doing this exercise, I've found it helpful to think about the following questions when discussing discrepancies:

  1. How related do we feel these are? Are they similar threads under one category or are they materially different?
  2. What are these observations telling us about user behavior that’s important for us to know?
  3. What is our end goal? How do these observations help us make actionable decisions and recommendations?

After some discussion around these questions, we determined that both observations were related to motivation: how do we keep users engaged when they have to take a break in the process, and how do we manage users' expectations to tap into relevant motivators? Since our ultimate goal is to build a prototype, these observations give us the data to build one that sets appropriate expectations with users and addresses the ever-pressing problem of attention bandwidth. Thus, we kept both observations and included them in the report.

Our team repeated this process until we felt we had covered the code book and resolved discrepancies. In the end, we had a board full of color-coded sticky notes clustered by theme.
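If everyone's codes live in spreadsheets, a short script can handle some of the bookkeeping for this step: pool every researcher's observations and check which codes from the code book still lack discussion. This is a sketch under assumed file names and column layout (matching the hypothetical template above), not how we actually ran our session:

```python
# Pool each researcher's coded workbooks (one sheet per interview,
# one row per code) and tally observations per code. Hypothetical files.
import pandas as pd

CODE_BOOK = ["motivation", "barrier", "facilitator"]

frames = []
for path in ["ana_codes.xlsx", "ben_codes.xlsx", "cam_codes.xlsx"]:
    sheets = pd.read_excel(path, sheet_name=None)  # dict: sheet -> DataFrame
    for participant, df in sheets.items():
        frames.append(df.assign(participant=participant,
                                coder=path.split("_")[0]))

pooled = pd.concat(frames, ignore_index=True)
observed = pooled.dropna(subset=["notes"]).groupby("code").size()

print("Observations per code:")
print(observed)
print("Codes with no observations yet:",
      sorted(set(CODE_BOOK) - set(observed.index)))
```

The conversation is still the point; the script just tells you which parts of the code book you haven't covered yet.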

Step #5: Write the report

This step becomes much simpler after you've spent this much time in the data. I took a stab at writing the report and sent it off to my team to review and help me find quotes. We organized the document around the same themes that emerged on the Miro board.

At Marketade, we use this approach with our clients, and they love it. Turns out, when we use it on ourselves, we love it, too! Not only does it make the analysis process much less lonely, it also holds us accountable for our conclusions and recommendations. Having really smart, accomplished people challenge your observations only serves to strengthen them, and you'll be that much more prepared when you present to clients and/or stakeholders.
