This article suggests rethinking what we learn when we put concepts and prototypes in front of customers for feedback.
What does ‘user testing’ mean to you? The most common flavour is ‘usability testing’ — which actually makes more sense as a term because it does not sound like we are testing the ‘user’. We put products and services in front of target customers to find out if they are easy and delightful to use. Right…?
So why is ‘user testing’ a trap then?
To answer this question, we need to take a step back and reflect upon why and how we evaluate services. Let’s say company Y wants to launch a new service. They create some sort of prototype to visualise parts of this service and get customer feedback. They might first test a paper-based prototype or (more often) an interactive mock-up of a mobile/web app. Some would go straight into developing an MVP, based on the assumption that they need real data to get real insights.
The next step is to invite people in, give them the prototype and ask them to complete a specific task or simply explore the service while thinking out loud. Then we ask for feedback on the overall experience, probing into particular parts that could be improved and into what the potential customer thinks of the service as a whole.
Our questions define the kind of insights we gather in this process. Are we asking the right questions?
As a design researcher, I have noted over the years that — regardless of the type of prototype we test — we can gather insights on three very distinct levels:
- The design. Comments on the interaction, the overall flow, colours, visual hierarchy, confusion with labels, copy etc. — these are very common examples of feedback on design. People reflect upon how the prototype has been put together and compare it with their previous experiences with other services. They state what they like, what they don’t, and why. This type of feedback can easily mislead us into losing sight of what’s important. The fact that they like the design doesn’t necessarily mean that this is the right service for them.
- The idea, the concept. How many times have I heard: “This is great! Amazing idea! I love it…”. This is the biggest trap of all, and I have seen product managers fall for it too many times. A very positive response to an idea does not necessarily imply that the customer would use the service — even if they actually state they would! Why? Because people like to be part of a cool idea. They immediately think of friends or people they know — who this great service would be for — instead of their own needs. A cool concept is not necessarily a meaningful and useful one. Another issue here is that people also try to please us. We asked for their opinion, their natural inclination is to make us happy, and many believe that they should encourage us by giving positive feedback (unless we explicitly ask for criticism). If you want to understand this effect better, read The Mom Test.
- The perceived value. This is the hardest nut to crack. People give us insights on whether they see benefits for them or not, but more often we need to probe them to get their real view on the service. This is the moment when I put aside the prototype and ask the person for specific examples: how could this service support them in their lives, in which ‘moments’ would it make sense to have it and what would they expect the service to do for them. We need to be open to alternative scenarios and service ideas. We are likely going to encounter constraining factors that nobody has considered yet. But these are our key strategic insights, the ones that will determine the success or failure of the service.
The problem is that ‘user testing’ — run in the traditional way — puts most emphasis on design rather than value.
This is why it is a trap: many companies evaluate the design of a product instead of its value proposition. Is this service addressing a real need or solving a significant enough problem? Is there a fit between what the customer needs and what the service is offering?
The more detailed and refined the prototype we are testing, the more customer feedback naturally focuses on the design and the concept rather than on the value of the service.
The real question one must ask is: what am I actually trying to validate by putting this prototype in front of potential customers?
You can get insights with all these different types of prototypes — but you do need a skilled researcher who is able to shift focus towards the important questions of ‘what & why’ rather than ‘how’. A professional who is also able to look underneath people’s fascination with a new idea to find out if it could truly serve their needs.
On the other hand, we often do need to visualise the full detail of the service to get a complete picture. I was recently testing a new concept that would be driven by data and behavioural patterns. This would have been quite difficult to explain to people without detailed examples. Participants really needed them to understand what the information means and how it could help their decision making. Then they were able to tell me whether this service was relevant to them or not and why.
Prototypes don’t have to be interactive, though. Depending on your research questions, simple paper mock-ups can go a long way towards enabling a meaningful conversation about service benefits. They are also much more cost-effective and faster to produce!
I believe we need a new definition of ‘user testing’, one in which we focus our efforts on evaluating value rather than design.
It is time to break free from the traditional user testing approach: to pause the ‘test’ and instead explain to people the elements they might be missing or not understanding so well. It is best to ‘walk someone through’ a prototype in order to get their reaction to the benefits of the service, rather than to test whether they can figure out how to use it.
We are talking about a ‘concept test’ rather than usability testing. Concept testing, however, has traditionally been associated with market research and methods like surveys and focus groups. The ‘prototypes’ used for concept tests in market research are typically landing pages, flyers and promotional materials. Even SurveyMonkey seems to associate ‘concept tests’ with testing logos and corporate identity. I question whether these techniques really help us understand customer needs or how to solve their pain points. I see them more as a way to validate whether the marketing material resonates well and whether the service is sellable. There is a big gap between market research and human-centred design research.
The Lean Start-up movement has addressed this gap in two ways: a) talking to customers about their problems and b) developing an MVP. A great approach, though many start-ups and entrepreneurs struggle to define what this MVP really should be, which features are core versus nice-to-have, and where to draw the line. Others have trouble getting the right insights when talking to their customers, because they tend to follow a sales strategy rather than a research approach. Listening should come before selling.
This is where human-centred design research plays a big part. We apply a mix of storytelling techniques to get early and honest feedback.
For example, for new, early-stage services, the use of storyboards in one-to-one interviews with potential customers can be quite a powerful tool. Storyboards provide context (where am I, what am I doing, what is my challenge), something that is missing when we put user interfaces in front of people to test.
Another advantage is being able to gather insights for services that rely a lot on human and physical touchpoints — not just digital. What about a ticket office? Or customer support? We do need to be able to capture the perceived value and customer expectations for those touchpoints as well.
Before giving prototypes to potential customers to evaluate, you need to be crystal clear about the question that you are trying to answer:
1. Do they have a need or a problem I can solve with my service? (problem validation)
2. Does my service satisfy their needs, do they see value? (service & need fit)
3. Do they understand how my service works and how to use it? (design & usability)
Based on this choice, you need to prepare the right set of questions and define the appropriate level of fidelity for your service prototype. Value-focused questions matter more than design-focused ones: there is no point in creating a design that people love for a service that they don’t really need.
Thanks for reading!
Share your thoughts by answering just 3 questions and contribute to the next article: How might we design ‘user tests’ that focus more on the value of a service?