Don’t ask users “would you use this?”, ask “would you pay for this?”

Validating the usefulness of a feature is still a somewhat new concept for design teams. We’re certainly used to validating the usability of a design, but far less practiced at validating that the design is an effective and desirable solution to a person’s problem.

When design (and even business) teams do attempt to validate the business value of design, the question is frequently tacked onto the end of an interview without too much thought. This is unfortunate, because business validation is perhaps one of the most important things you can do for your product.

Many product teams who want to validate feature concepts recognize that validation is tricky, but aren’t sure how to do it. They may ask a task-based question like “does this achieve [the objective]?” Questions like this are extremely leading and don’t lend themselves to understanding the value of the thing you’re proposing.

Recognizing this, an interviewer may then think of questions which wouldn’t be leading. Enter the oft-asked but highly ineffective “would you use this?”

“Would you use this?” is packed with ambiguity

When an interviewer asks about the business value of a design, they typically want to know if the person would find it useful. It feels natural to translate this directly into a question: “would you use this?” It’s not a leading question, and at face value, it seems like it would produce straightforward results.

The problem with “would you use this?” is that it’s vague and hypothetical, and it asks almost nothing of the user.

When you ask “would you use this?” you’re effectively asking a person to assess the actions of a hypothetical future version of themselves. This relies on someone correctly self-assessing their future actions, values, and habits. The problem is that the interviewee will always imagine the best possible scenario. This is known as “optimism bias”: people tend to picture themselves in the best possible situation later in life. Of course they can see themselves using your time-saving, life-enhancing product one day down the road.

On top of this, people are famously bad self-reporters, and now you’re asking them to report on something they might do at some time in the future.

Asking “would you use this today?” alleviates this problem to a degree, but the question is still non-specific. The frequency and depth of “use” are not specified. It could mean just once, or only for a few minutes. It’s like asking “would you try this?” Of course they’d try it. It requires no commitment from the person using it.

Ask “would you pay X for this?” instead

Where X is the price of your product or enhancement. I’ve found it effective to start with a dollar and work my way up by multiples of 10 ($1, $10, $100, and so on).

This is a powerful validation question because it uncovers whether the person would value the solution enough to use it every day.

When someone considers a purchase, their brain performs all manner of mental gymnastics to justify it.

A big part of this purchase decision is frequency of use; another is pain relief. If the value proposition isn’t great enough to justify forking over the cash, they won’t do it.

Even though it’s still hypothetical, you make the action more visceral for the interviewee by asking them to commit to a price they would be willing to pay. For all they know, you might come back with a finished product and expect them to pay what they said they would.

If you were actually asking for money (which many people do!), this question is what salespeople call a “direct close”, because it’s the most direct way to ask someone to decide to buy. It’s the “action” part of the AIDA (Attention, Interest, Desire, Action) framework, as popularized by the swear-laden, fear-inducing pep talk in Glengarry Glen Ross.

If you think about it, user validation acts like the world’s lowest-pressure sales pitch. You’re asking someone (who is presumably in the target audience) to give you their opinion on a product you’re creating. If it meets their needs, they’ve probably built up some degree of interest and desire. Now you’re asking them to take an (almost) real action: to part with some number of dollars, right now, to have this product. Asking “this is available today, would you pay X for this?” is as close as you can get to asking for a sale in a user interview.

Also ask questions which require the user to sacrifice something

“Would you pay X for this?” is a question which requires the user to consider giving up their money, which also translates to time and effort, to get your solution. Depending on what you’re trying to test, there are things other than money that you can ask the person to provide.

On a recent project, I was validating a design for a status report. It was a fairly limited feature designed mostly for people already using the product, so asking them to pay more for something so basic felt odd. Since it was replacing a process they were already doing (status reporting to stakeholders), I wanted to see if it accomplished that goal.

Instead of “would you pay for this?”, I asked “would you forward this on to stakeholders/customers in place of your existing report?” This question requires some thought to answer, because it means replacing something they know is working now. It also means staking a piece of their reputation with a client or team on this feature being good enough to send. If they said “no”, I’d ask “would you unsubscribe from this email or filter it out?” to understand the degree of disinterest. I then asked what was missing from the report we designed, and how we could change it to make it forward-worthy.

Set goals before any interviews

Business validation is a very different beast from usability validation. Even if your users can easily accomplish usability-testing tasks, far fewer will say they want to buy the product, or make some other sacrifice to have it. This is why you should set goals for your validation questions early, to ensure you’re not biasing the test.

If your audience is more specialized, then presumably your solution more directly addresses their needs. It also means your audience is relatively small, and fewer people will encounter (or need) your product. In this case, you would want to set your goal higher, like 6 out of 10 “yes” responses.

If you’re building for a broad base of users, you might want to set a more modest goal, like 3 out of 10 “yes” responses. Your audience is potentially much bigger, meaning a much larger volume of encounters with the product, and needs may vary greatly from person to person.
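As a rough sketch of what checking against that goal might look like, here’s how you could tally answers in Python. The `meets_goal` function and the answer data are illustrative, not from a real project:

```python
def meets_goal(answers: list[str], goal: float) -> bool:
    """Return True if the share of 'yes' answers meets the goal ratio."""
    return answers.count("yes") / len(answers) >= goal

# Ten hypothetical interviews for a specialized audience, with a 6/10 goal.
niche_answers = ["yes", "yes", "no", "yes", "yes",
                 "no", "yes", "yes", "no", "no"]
print(meets_goal(niche_answers, goal=0.6))  # True: 6 of 10 said "yes"
```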

Categorize your results

Qualitative feedback is difficult to benchmark, and strong validation questions are no different. Rarely does “would you pay for this?” yield a direct “yes” or “no”; more often you get a “yes” or “maybe” with some caveats.

To work with this challenge, I suggest categorizing your responses. This could be as simple as “yes” and “no”, where maybes translate to nos. If you want more granularity than yes and no, you could also break the maybes out into “maybe-yes” and “maybe-no” for a more detailed categorization.
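Here’s a minimal, hypothetical sketch of how those categories could be tallied, scoring the same ten made-up responses two ways: strictly (every “maybe” becomes a “no”) and leniently (“maybe-yes” counts toward “yes”):

```python
from collections import Counter

# Hypothetical categorized responses from ten interviews (not real data).
responses = ["yes", "maybe-yes", "no", "yes", "maybe-no",
             "maybe-yes", "no", "yes", "maybe-yes", "no"]
counts = Counter(responses)

strict_yes = counts["yes"]                         # maybes translate to nos
lenient_yes = counts["yes"] + counts["maybe-yes"]  # maybe-yes leans yes

total = len(responses)
print(f"strict:  {strict_yes}/{total} ({strict_yes / total:.0%})")   # 3/10 (30%)
print(f"lenient: {lenient_yes}/{total} ({lenient_yes / total:.0%})") # 6/10 (60%)
```

The gap between the two tallies is itself useful: a large pile of “maybe-yes” answers usually means the caveats are worth digging into.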

Here are two examples from a recent project, both for Pivotal Tracker. I was testing a few status reporting concepts with both customers and internal users (Pivotal Labs consultants).

Our team was much more hopeful about the first design, a multi-project dashboard. I asked customer and internal PMs alike, “would you pay for this?” Many internal users didn’t feel comfortable with this question (since they get the tool for free), so I instead asked “would you send an expense request to get it?”

Based on the responses of the customers I spoke to, the proposed design didn’t seem to be validating as we had hoped.

The second design was the status report email I mentioned earlier. It was produced directly from user research findings. The design itself was incredibly simple, but we were surprised by how well it tested.

Since this was a simple feature that we already had represented elsewhere in the product, it felt odd to ask them to pay for it. Instead I wanted to know if it would replace their pre-existing email reports, and asked “would you forward this on to stakeholders/customers in place of your existing report?”.

Plotted side by side, the two designs produced almost polar-opposite results. The status email was very well received. And despite its simplicity, many of the interviewees who previously said they would not have paid for the dashboard said they would pay for the email.

That being said, if we were cutthroat about sticking to our original goal (60% “yes”) and converted every “maybe” into a “no”, then our email test was invalidated with a 50% success rate.

Ultimately, gauging the success or failure of a validation is up to you and your team. We could say the email is still a valid design because it may also increase engagement. We could also say the multi-project dashboard is valid because those who said “yes”, while few, were big customers willing to pay us an outsized amount of money for it.

Regardless of what you decide, asking strong validating questions and plotting the results gives you a lot more to work with. I’ve found it yields more direct and fruitful responses from users, both in terms of what they need from your product and whether it’s worth building at all.

This article was originally posted on my website