Our test lab helps clients to gather “emotional feedback”

Our agency has recently upgraded its Dublin lab. This is how (and why) we run user tests.

Illustration by Vedrana Bosnjak
This article was originally published on Xwerx blog

We’ve always been firm believers in the benefits of watching real customers use the software we design and have conducted many types of user tests over the years. From rapid paper prototyping with people we literally pluck from the streets right through to formal tests with functioning software, we understand how critical it is to validate our ideas in front of people at all stages of the design process.

We recently upgraded the lab at our office in the Digital Depot in Dublin, allowing us to conduct testing sessions with a wide range of desktop and mobile devices.

If you need to understand how your customers respond to your digital product (regardless of its stage of development), we’d love to hear from you.

How it works

Our test users are briefed by a facilitator on the tasks they will be asked to undertake during the session.

The facilitator guides the session, allowing remote stakeholders to participate through a live screen share (we use join.me).

Every session is recorded using lookback.io. Following each session, the video is uploaded to lookback.io, annotated for key points and shared with stakeholders for further review.

The testing sessions are underpinned by our UX experts’ advice on how to plan and manage the study. “We help clients to determine what they want to learn, which groups of users they should talk to and what they can do with the results”, explains Matthew O’Sullivan, Senior UX Designer at Xwerx.

What you want to learn

Understanding the usability pain points of an existing interface, measuring the value of a new feature, comparing option A with option B — we have conducted all sorts of user tests.

The decisions to observe a user in a lab (rather than in their usual environment) and to talk to them in person (rather than remotely) are sometimes driven by specific requirements such as confidentiality or strict impartiality. But more than anything, the testing method should be determined by what exactly you want to find out.

“While automated user tests are great at revealing what users do, they sometimes lack the context to explain why they behaved that way”, observes Matthew O’Sullivan. Moderated research allows designers to gather in-depth qualitative feedback and to understand “what the users truly value”.

Dealing with confusion

By way of example, we recently tested a new sign-up process for a large service provider. The participants — a group of customers of our client’s competitors — reported difficulty in picking a service plan at the start of the conversion process.

Quotes such as “Oh my … There’s a lot going on here!” helped the facilitator dig into the user’s feelings of being overwhelmed and confused. After the review, the product team decided to revise the page presentation and reduce the number of plans to choose from. Two versions of the screen — one with a card-based layout and one with a table layout — were later tested during a new round of lab sessions.

Perceived value

This type of insight, which provides the facilitator with a sense of the user’s “emotional experience”, often justifies the time invested in moderated lab sessions.

Especially since some valuable tests can be done quickly and easily.

The benefits can be huge. Good UX on critical pages will not only improve conversion rates but will increase the subjective worth that a user attributes to the brand itself.


By Flavien Plouzennec
