Usability Testing in the Metaverse

Published in EchoUser Stories · 4 min read · Jul 8, 2022

This spring, a long-term client approached EchoUser with a project straight out of a science fiction movie, or rather, straight out of the mid-pandemic 2022 sci-fi experience that is real life. Our researchers have decades of experience adapting research methods to meet client goals and constraints, but this project stretched us significantly.

The client had been working stealthily for years on augmented and virtual reality (AR/VR), envisioning the future of virtual workplace collaboration. With the rapid shift to remote work, their prototypes suddenly had more urgency, and they were coming out of stealth mode. Their customers were eager for new ways to make remote interaction feel authentic and to use 3D in presentations and product development. Was “…infinity and beyond” suddenly here and now?

The research method was fairly standard: a usability test, in which participants are asked to complete a series of tasks while the product is assessed for its efficiency, effectiveness, and user satisfaction (a rough sketch of those measures follows the list below). The research protocol, however, was highly customized. As is often the case, the new technology immediately stretched common practices for user testing and raised key questions:

  • The experience required an expensive AR headset. How would we get the technology into participants’ hands?
  • The experience was collaborative in nature. How would we test with more than one user at a time, or construct a single-user test that simulated real-world collaboration?
  • The experience happened mostly in the user’s own headset. How would we see what participants were seeing in order to evaluate their experience appropriately?
  • The experience had an inherently steep learning curve. How would we differentiate observations about AR technology itself from observations about the specific product?
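To make the standard assessment criteria concrete, here is a minimal sketch in Python of how efficiency, effectiveness, and satisfaction might be computed from session records. The field names, scales, and values are hypothetical, chosen for illustration; none of this data comes from the actual study.

```python
from statistics import mean

# Hypothetical per-task session logs: task completion (effectiveness),
# time on task in seconds (efficiency), and a 1-5 satisfaction rating.
# All names and values are illustrative, not from the actual study.
tasks = [
    {"completed": True,  "seconds": 95,  "satisfaction": 4},
    {"completed": True,  "seconds": 210, "satisfaction": 3},
    {"completed": False, "seconds": 300, "satisfaction": 2},
]

effectiveness = sum(t["completed"] for t in tasks) / len(tasks)
efficiency = mean(t["seconds"] for t in tasks if t["completed"])
satisfaction = mean(t["satisfaction"] for t in tasks)

print(f"completion rate: {effectiveness:.0%}")
print(f"mean time on completed tasks: {efficiency:.0f}s")
print(f"mean satisfaction: {satisfaction:.1f}/5")
```

However the numbers are tallied, the questions above meant the real work was in how those measurements could be captured at all in a headset-bound, collaborative experience.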

Learnings
We learned quickly, and the testing involved vital pilot sessions in which we tweaked the approach. Ultimately, we arrived at some key principles to carry into similar projects in the future:

#1 In-person test, virtual moderation

We solved the problem of getting headsets to participants by inviting them into a physical office. In hindsight, that control over the space and technology proved important in countless ways; while so much usability work today can be done remotely with common tools like webcams and screen sharing, here we needed in-person support.

On the other side of the coin, we had a remote researcher moderate the test, donning their own headset. This allowed us to join and observe participants in their AR environment, and it was one tactic for creating the collaborative experience central to the product. This mix of in-person and virtual is something we’ll continue in the future.

#2 Reduce the technology

Initially, we had a tech-heavy protocol with multiple computers and headsets, screencasting, and web conferencing. Yet much of the tech we introduced was not important to the user experience we were testing: with so much to juggle, a single misstep meant we would be testing our research setup rather than the client’s product. We learned to be ruthless about establishing contingencies and to be ready to ditch any system that was not vital to what we wanted to learn.

For example, after seeing that the process we used to record the participant’s view could impact the headsets’ performance, we designed a contingency plan to evaluate the sessions from audio recordings and researcher observations alone. While we never needed the contingency, designing it was a vital step in thinking through what was essential and what we could strip away.

#3 Control for expectations

We learned we’re at a stage where most people still have an initial “wow” reaction to AR/VR products. It truly is the movies come to life. Yet we needed to work hard to get beyond that first impression to the real product experience. There were a few important tactics:

  • We recruited two groups of participants: 1) those who had never used AR/VR, and 2) those with prior AR/VR experience. We doubled the usual number of tests so that we could compare the two groups.
  • We heavily coached participants through the initial experience of setting up their headset, without soliciting much feedback. By guiding them through unexciting tasks like calibrating the headset for their eyes, we mimicked some of the onboarding they would naturally have completed before using the product. This seemed to get past some of the initial “expensive, shiny toy” reaction, so we could spend the bulk of each test on the tasks that mattered to the client.
  • We added ranking questions before and after the test to measure not only participants’ overall assessment of the product but also how their baseline expectations had shifted (see the sketch after this list).
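As an illustration of how those last two tactics combine, here is a minimal sketch in Python of comparing pre/post ranking shifts across the two recruited groups. The group labels, the 1-to-7 scale, and every rating are hypothetical, invented for the example rather than taken from the study.

```python
from statistics import mean

# Hypothetical pre/post product ratings (1-7 scale) per participant,
# split by the two recruited groups. Values are illustrative only.
sessions = {
    "no_prior_ar_vr": [(3, 6), (4, 5), (2, 6), (3, 7)],   # (pre, post)
    "experienced_ar_vr": [(5, 5), (6, 4), (5, 6), (6, 5)],
}

for group, ratings in sessions.items():
    pre = mean(r[0] for r in ratings)
    post = mean(r[1] for r in ratings)
    # A large positive shift in the novice group may reflect the generic
    # "wow" reaction to AR/VR itself rather than the product under test;
    # the experienced group's shift offers a rough baseline for comparison.
    print(f"{group}: pre={pre:.1f}, post={post:.1f}, shift={post - pre:+.1f}")
```

Reading the two groups side by side is what let us separate enthusiasm for the medium from assessment of the product itself.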

Many of these are good tactics for any research project, but they became far more vital here. AR/VR merges our physical and digital lives, and it demanded a research approach that thoughtfully merged the two.

So, EchoUser has officially completed its first mission to the metaverse. (Side note: if you’re still wondering what that word even means, check out McKinsey’s recent podcast series).

Our report back: while much was familiar, the research approach needed fresh eyes and fresh constraints. What did we miss? What else have you learned in applying UX practice to new realities?

Read more about EchoUser on our website. Or follow us on Instagram.

Brian Salts-Halcomb



EchoUser is an SF-based UX research and design consultancy, solving complex problems with thoughtful design for 14+ years. Check us out at https://echouser.com/