Design experts Daniella Marooney and Eric Russell, affinity mapping in the office (pre-pandemic). Photo credit: Laura Rogers

Pandemic-Friendly User Research

Delanie Ricketts
Published in FiscalNoteworthy
7 min read · Jan 27, 2021


User research is inherently about understanding people — what they need, how they expect tools to work, what pain points they have, and what motivates them. Often, this type of research is more easily done in person. It can be difficult to understand how frustrating or easy something is for someone to do, or how someone feels about a particular experience, when you can’t observe their facial expressions or body language firsthand. For these reasons, prior to March of 2020, our user research team conducted most of our research in person, whether at a participant’s office or within our own.

When the pandemic forced us to work entirely from home, we adapted our research process from primarily in-person to remote methods. As much as we miss in-person connections with each other, our stakeholders, and our participants, our new remote mode of research revealed several surprising benefits. Techniques for gaining insight into people’s digital worlds, more efficient methods of data capture, and more sustainable and powerful analysis methods — all learnings from our shift to remote-only research — will continue to serve as best practices for our team regardless of our whereabouts.

Adapting in-person research methods to be remote

Before the pandemic hit, we were kicking off a project to map out the journeys of our target users in order to understand how our products and services fit into the larger context of people’s work lives. We wanted to immerse ourselves in our users’ environments — in person — to gain a deeper understanding of their jobs, their pain points, and their experiences.

Of course, 2020 had other plans. After completing a single journey mapping session with a single customer, visiting other people’s offices suddenly became the worst idea ever.

We had to pivot — and fast. We had designed a paper-based journey mapping template. We had manufactured stickers for participants to map out the tools they used and their emotions. We had several sessions lined up that were supposed to take place in users’ offices. All of these materials and sessions were going to have to change.

The journey mapping template and stickers we developed and tested internally.

The first thing we did was convert our materials into digital formats. Instead of physically creating journey maps with participants using sticky notes, stickers, and our paper template, we created digital journey maps using Google Jamboard and Miro, sharing our screen as we went to validate that we were capturing participants’ experiences accurately.

While moving to a remote method presented challenges, the remote constraint ultimately revealed upsides, like making it seamless to digitize the outputs of our sessions. More importantly, the integrity of our research did not suffer: we were able to accurately capture participants’ jobs despite not being able to map them out in person.

In addition to employing the journey mapping method to capture participants’ jobs, we asked participants to screen-share while they demonstrated how they did various tasks. This is known as the contextual inquiry method in user research. We opted for this method as most people will summarize and speak abstractly about tasks unless they are actually doing them. Observing how users work while they actually work is the best way to understand the nuanced reality of users’ work-related needs and workflows.

Remote contextual inquiry made it difficult for us to observe participants using anything physical, like paper notebooks or calendars. However, observing participants remotely via screen-share enabled us to gain detailed insight into how participants used various digital tools to get tasks done, perhaps in even greater detail than if we had been looking over participants’ shoulders in person.

If we had to go back and do it all over again — say, at a time when we could travel safely without issue — we’d prefer the richness that being in person adds to qualitative research. Video calls miss so much that can be informative: body language, environments, distractions, and so on.

However, adapting our in-person research methods to be remote-friendly had several benefits. We captured valuable information with fewer resources than in-person research requires. We gained perhaps greater insight into the digital worlds of our users than if we had done the research in person. We also ended up with completely digitized research outputs that required no manual effort to produce.

Ultimately, despite the pandemic throwing us some curveballs, we succeeded in capturing the information our research set out to gather. Moreover, our participants were often happy for the opportunity to connect during a period when opportunities for human interaction were constrained. Even if we couldn’t be in person to learn with one another, remote research proved to be a positive experience for all involved.

Analyzing qualitative data together, but apart

After gathering all of our journey mapping data, it was time to get to work analyzing it. Part of the reason we started the project of mapping out user needs was to challenge existing assumptions. Since grounded theory analysis is based on original research data rather than preconceived constructs, following this approach enabled us to develop emergent categories of user needs based on how users actually described their jobs. By developing categories of user needs from the ground up, we ensured we didn’t fall into the trap of simply looking to validate existing assumptions about what categories of user needs exist.

When following a grounded theory approach, researchers initially code data without restrictions — referred to as “open coding” — as these codes are entirely provisional. Coding data simply means highlighting parts of the data (in our case, transcripts of our journey mapping sessions) and applying tags, or “codes,” to them. Over time, researchers move into “focused coding” once enough data has been analyzed to solidify emergent codes, sometimes through “axial coding” (linking open codes together to generate more meaningful categories). These categories of codes form the foundation of the study’s theory, or in our case, our theory of what users need in order to do their jobs.
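
To make the stages concrete, here’s a rough sketch of how open and axial codes might be represented in code (all codes and excerpts below are hypothetical, invented for illustration; they are not from our study):

```python
# Hypothetical codes and excerpts, for illustration only.

# Open coding: provisional tags applied freely to transcript excerpts.
open_codes = {
    "tracks bills in a spreadsheet": ["I keep a spreadsheet of every bill we follow."],
    "misses committee updates":      ["I found out about the hearing too late."],
    "re-checks sources daily":       ["Every morning I go through the same sites again."],
}

# Axial coding: linking related open codes into more meaningful categories.
categories = {
    "monitoring legislative activity": [
        "misses committee updates",
        "re-checks sources daily",
    ],
    "organizing tracked items": [
        "tracks bills in a spreadsheet",
    ],
}

# Focused coding would then re-code incoming data against these
# solidified categories instead of inventing new provisional tags.
for category, codes in categories.items():
    print(f"{category}: {', '.join(codes)}")
```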

An example of how we link open codes together (“axial coding”) using Dovetail.

To increase the validity of our analysis, we ensured evidence from multiple participants triangulated our interpretation of the data. To increase the reliability of our research, multiple researchers coded the original data sets and cross-checked each other’s work. This is known as increasing inter-coder reliability.
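
As a side note, inter-coder agreement can also be quantified; one standard measure is Cohen’s kappa, which corrects raw agreement for chance. A minimal sketch, with hypothetical codes:

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Agreement between two coders, corrected for chance (1.0 = perfect)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes two researchers independently assigned to the same four excerpts.
coder_a = ["pain point", "workflow", "pain point", "suggestion"]
coder_b = ["pain point", "workflow", "suggestion", "suggestion"]
print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # kappa = 0.64
```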

Usually, we analyze data together, in person. It makes it easier for us to discuss differing interpretations of what people said and align on how we want to code and categorize it. Often, we’ll use physical materials like sticky notes to create code categories and arrange examples spatially around them. This is known as affinity mapping. Physical affinity mapping with sticky notes is a great way to materialize complicated data into clear concepts and tangible themes, and it is perhaps the most common way designers analyze qualitative research data.

This was not in the cards this year, so we set up time to go through our data together over video calls. There is no substitute for a floor-to-ceiling wall of sticky notes. Organizing the massive amount of data we captured within the confines of a 13-inch laptop screen or, at best, a 24-inch monitor is simply more difficult. It’s hard to see patterns across data when you cannot easily see all the data at once.

Nonetheless, being forced to analyze our data remotely pushed us to take advantage of digital qualitative data analysis tools like Dovetail. Using Dovetail, we created codes and categories of codes together, using its built-in description fields to document our shared interpretations of them.

Although it’s hard to beat the efficiency (and fun!) of affinity mapping qualitative data with others on a wall, forcing ourselves to practice digital qualitative data analysis made it easier for us to collaborate on our analysis sustainably and asynchronously. Through comments, we could document our rationale for a given code and tag other researchers to review and validate it. Coding our data digitally also made it easier for us to iterate on our codes and code categories, as we could merge, rename, and reassign codes with the click of a button. Plus, we didn’t have to deal with the issue of sticky notes falling off the wall. Our data was digital and available to reference in perpetuity!

Beyond the analysis process, since we coded our qualitative data into a centralized database, we were able to compare our data from this study to prior studies through a simple search. Using a centralized database also empowered us to create custom, reusable coding structures so that we could search and query data using standard parameters. For instance, we wanted to be able to query any data in our system based on what feature or specific noun within our products the participant happened to be referring to. To enable this, we created custom coding structures for each product line and populated them with the standard features and nouns found within those product lines. Using this structure, we could query all data in the system, regardless of what study it emanated from, to answer questions like:

  • What pain points do users have with our search functionality?
  • What suggestions do users have for our CQ product line?
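
Conceptually, answering questions like these amounts to filtering a repository of coded excerpts by tags. Dovetail provides this through its own search; the sketch below is a hypothetical in-memory stand-in to illustrate the idea (study names, excerpts, and code labels are all invented):

```python
from dataclasses import dataclass, field

@dataclass
class Highlight:
    """A coded excerpt from any study in the centralized repository."""
    study: str
    excerpt: str
    codes: set[str] = field(default_factory=set)

# Hypothetical repository spanning multiple studies.
repository = [
    Highlight("journey-mapping-2020", "Search never surfaces the bill I need.",
              {"pain point", "feature:search"}),
    Highlight("usability-2019", "Let me filter alerts by committee.",
              {"suggestion", "product:CQ"}),
]

def query(repo: list[Highlight], *required_codes: str) -> list[Highlight]:
    """Return excerpts tagged with every requested code, across all studies."""
    return [h for h in repo if set(required_codes) <= h.codes]

# "What pain points do users have with our search functionality?"
for hit in query(repository, "pain point", "feature:search"):
    print(hit.study, "->", hit.excerpt)
```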

While having physical space to display all of our codes would still be a helpful way to reference our coding structures and categories, coding the actual data and documenting our rationale digitally proved to be an invaluable practice that we’ll continue to do regardless of our work-from-home status.

References

  • For more information on journey mapping and contextual inquiry methods, see Martin & Hanington’s Universal Methods of Design (2020).
  • For more information on grounded theory, see Lazar, Feng, and Hochheiser’s Research Methods in Human-Computer Interaction (2017).
  • For more information on using qualitative analysis methods in UX research, see Amber Westerholm-Smyth’s Medium article, “Your personas probably suck. Here’s how you can build them better.”
