Researchers must be participants: A case for immersive ethnography in UX

Torang Asadi
UXR @ Microsoft
Published in UX Collective
6 min read · Jul 28, 2020
Photo by Dima Pechurin on Unsplash

I did my best ethnographic work milking goats and washing dishes in a strict religious commune, deeply engaged in participant observation. Helping with the farm at 4am and with the post-dinner cleanup allowed me to have in-depth conversations with members and, by immersing myself in the experience, to truly understand what it was like to live communally.

Participant observation, sometimes called immersive ethnography, is the most comprehensive and accurate way of gathering qualitative data. For example, Anne Allison’s Nightwork (1994) is the result of months of research conducted in a Japanese hostess club while she herself served as a hostess. She observed the club’s dynamics and conversed with the male patrons, their wives, the women who served them, those who owned and operated the club, and the many other actors involved. Most importantly, her deep insights and comprehensive understanding of the actual experiences of Japanese hostesses were a result of her immersive participation.

In UX research, we observe through contextual inquiry in a user’s natural environment and through user testing, but we rarely participate, except marginally through competitive analysis or self-testing a product. In the UX context, immersive participation is costly, inefficient, difficult in terms of stakeholder buy-in and recruiting, and not always applicable. Having designed the product ourselves also introduces a fundamental bias: we can never experience it the way a naive user would. However, there is one experience in UX worth exploring through immersion: that of our research participants.

Immersive Ethnography in UX Research

If we think of our participants as users and our research tools as products, a new experience enters the equation. Since we’re making decisions and recommendations based on data obtained from these participants, it’s important to account for their experience during testing.

How do we do this? By participating in research from the other side of the table. In other words, by building an empathetic understanding of what it’s like to participate in our own research projects.

For example, I signed up as a participant with the unmoderated remote testing and “crowd intelligence” survey tools we use at Lenovo. I took around 5 user tests and responded to around 500 survey questions. This has given me a good sense of what it’s like to be on the other side of a study.

In addition to learning from the work of other researchers, this has allowed me to:

1. Empathize with participants to

  • Design better tasks
  • Be mindful of the time commitments for each test
  • Regulate the information provided to the testers
  • Know the tricks they use to speed up the process and get the test over with
  • Understand why they want to use such tricks in the first place
  • And have a better sense of what to expect of them in light of the specific incentive they’re receiving

2. Know the interface well enough to

  • Understand when it creates biases or introduces new variables
  • Know exactly what those biases and variables are
  • Better design the test
  • And account for any perception issues that might affect their responses

3. Better understand the feedback to

  • Gauge the effectiveness of screeners
  • Have a better idea of what questions testers think they are answering
  • Know why users focus on certain things and not others
  • And account for when the interface/platform creates a bias by shaping certain responses

4. Most importantly: know the quality of my data and

  • Identify the shortcomings of each dataset
  • Know when and why data is reliable
  • Know when and why data isn’t reliable
  • And understand the anomalies and outliers

Immersive Participation in Research Tools: Two Examples

1. Unmoderated remote user tests

I spent around half an hour taking screeners before finally qualifying for a test, spent another 20–30 minutes on the test itself, and was compensated $10, which works out to roughly $10/hr once you count the screener time. I noticed that in addition to growing weary of screeners, I became somewhat suspicious of them. Some screeners are very extensive, and you’re bound to wonder whether the tool holds on to this data, especially since testers are not compensated for answering them. Conspiracy theory? Most probably. But it didn’t keep me from growing bitter about screeners and feeling comfortable lying on them. As a result, my own screeners have become shorter but more pointed.

I also found myself rushing through tasks, guessing at what the researcher was trying to ask, and paying attention only to what I thought the researcher was asking about. This made my test much less organic. I was also uncomfortable giving harsh criticism and would pair any negative comment with a compliment about something else I was seeing, something I’ve noticed many participants doing when watching my own tests as well. This has changed the way I design tests, especially how I set up scenarios to frame tasks.

Many tests use condescending language (e.g. “you will receive a 1-star review if you x or y”), leave you without a rating, or give you a low rating for unfair reasons (e.g. “tester did not give detailed enough feedback on the 11th task” on a test I spent 40 minutes completing). My experience as a tester has made me much kinder as a researcher, especially since I know that many testers do this as an additional source of income.

2. Large-scale insights

There are a few things about being a crowdsourcing research participant that we should understand as researchers. The tool we use at Lenovo awards 3–5 points per multiple-choice question and 5–20 points per open-ended question, and you get $5 for every 1,000 points; that’s $5 for roughly every 200–333 multiple-choice questions or 50–200 open-ended questions. Not much. After a month of answering questions every now and then, I have a little over 1,000 points, or $5 worth of gift cards.
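If you want to sanity-check that conversion, the arithmetic is simple enough to script. A quick sketch in Python, using only the point values above:

```python
# Back-of-the-envelope payout math, using the point values above:
# 3-5 points per multiple-choice question, 5-20 points per
# open-ended question, and $5 per 1,000 points.

def questions_per_5_dollars(points_per_question: float) -> float:
    """How many questions it takes to earn 1,000 points ($5)."""
    return 1000 / points_per_question

# Multiple-choice: 200 to ~333 questions per $5
print(questions_per_5_dollars(5), questions_per_5_dollars(3))

# Open-ended: 50 to 200 questions per $5
print(questions_per_5_dollars(20), questions_per_5_dollars(5))
```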

As expected, participants try to fly through these questions as fast as possible, which means many provide careless or false responses. The tool addresses this by randomly inserting trick questions to make sure you’re paying attention and giving good-quality responses. And for questions that ask about your reaction to a certain video, image, or piece of marketing material, it withholds the question until you have opened the media.
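The tool’s internal checks aren’t visible to us as researchers, but once responses are exported we can run a similar screen on our own side. Here is a minimal sketch; the column names and the time threshold are my own assumptions, not anything the tool exposes:

```python
import pandas as pd

# Hypothetical export of crowd-survey responses; the column names
# and threshold below are assumptions for illustration.
responses = pd.DataFrame({
    "respondent_id": [101, 102, 103, 104],
    "seconds_on_question": [2.1, 14.0, 1.5, 22.3],
    "passed_attention_check": [True, True, False, True],
})

MIN_SECONDS = 5.0  # below this, the question probably wasn't read

# Flag respondents who sped through or failed a trick question.
responses["suspect"] = (
    (responses["seconds_on_question"] < MIN_SECONDS)
    | ~responses["passed_attention_check"]
)

print(responses[responses["suspect"]])
```

Nothing fancy, but even this much tells you which rows to discount before drawing conclusions.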

In a recent study on a Lenovo Yoga laptop, we noticed many responses about fitness and health, showing that users were responding to the word ‘Yoga’ and not to the contents of the media attached to the question. Having been immersed in the platform as a participant, I knew this was because the media opens in a new tab, so you can simply close it and return to the question without actually engaging with the media.
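Knowing this, I can at least flag answers that look keyword-primed rather than media-driven before analysis. A rough illustration, where the sample answers and the keyword list are hypothetical, not taken from the actual study:

```python
import re
import pandas as pd

# Hypothetical open-ended answers from a study like the Yoga one above.
answers = pd.Series([
    "Great for my morning yoga and fitness routine",
    "The hinge looks sturdy and the screen folds flat",
    "I'd use it to track my health and workouts",
])

# Words suggesting the respondent reacted to the product name "Yoga"
# rather than to the attached media; this list is my own assumption.
PRIMED = re.compile(r"\b(yoga|fitness|health|workout|exercise)\b",
                    re.IGNORECASE)

keyword_primed = answers.str.contains(PRIMED)
print(answers[keyword_primed])  # candidates to review or down-weight
```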

Just as I had with user tests, I felt some suspicion when answering these survey questions as well. Since you cannot sign up without providing detailed personal information (including your home address), I felt as if my responses were being gathered to build the tool a highly personal profile. This feeling grew out of certain questions that asked about my preferences, what I would be doing on New Year’s Eve, whom I lived with, how I spent my Sundays, where I bought my groceries, and so on. I automatically began to lie when answering these questions.

This experience has drastically changed the types of questions I ask, but most importantly, it has taught me to properly manage the resulting dataset. Without a deep understanding of how the data is produced, we won’t know its shortcomings. And, in fact, these shortcomings can also show us the data’s strengths and hidden insights.

A note on the ethics of participating in research as researchers: we introduce a certain bias into other researchers’ datasets. For some tests this may not matter, but others use screeners specifically to weed out participants who work in market research, UX, and similar fields. In addition to answering those screener questions honestly, I made sure not to take too many tests and stayed away from ones that asked me to look at products similar to the ones I research.

If we are careful about the biases we introduce, immersive participation in our research tools is a highly effective way to design better research and to understand how participants produce the feedback that makes up our datasets. This has been one of the most useful ethnographic practices in my experience as a UX researcher.

