Product design with conviction: evidence-based, user-centred design

Julian Harris
Knowcast
3 min read · Jul 2, 2023

Our field work was founded with humility: “we might have an idea what people need but let’s actually observe them and find out”. We found out. But then what?

The most helpful definition of startups I use is “projects that may not yet have a business model”.

  • It’s unclear what the customer base is, if there is one at all.
  • There isn’t necessarily a clear industry of competitors either.

How do you find these things? I think we had a good sense of the customer base in terms of needs, but solution? Another story entirely.

Starting with price

A landmark podcast I listened to around the time of the field study was “price before product” on Starting Greatness by Mike Maples Jr.

In essence, the message was: test "willingness to pay" before you build a product. The interview promoted a book, Monetizing Innovation, which was essentially a promo for the author's professional services. Here's what I found useful:

  • Best-practice pricing insight comes from the van Westendorp survey method, which brackets pricing: it gives users free-form number boxes to name the prices that would be "too cheap", a bargain, starting to become expensive, and too expensive.
  • Best practice for feature priorities is MaxDiff, which asks users "what are the least and most important features in this list?" It turns out users can answer this more reliably than "rank these features".
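To make the van Westendorp method concrete, here's a minimal sketch of how its four answers per respondent can be analysed. This is my own simplified illustration, not SurveyKing's (or anyone else's) actual implementation, and the response data is made up: the idea is to find the price where the share of people calling it "too cheap" crosses the share calling it "too expensive".

```python
# Simplified van Westendorp Price Sensitivity Meter analysis (illustrative).
# Each respondent supplies four price thresholds; we look for the candidate
# price that minimises the gap between the "too cheap" and "too expensive"
# cumulative curves (sometimes called the optimal price point).

def optimal_price_point(responses, candidate_prices):
    """Return the candidate price where the % calling it too cheap
    is closest to the % calling it too expensive."""
    n = len(responses)
    best_price, best_gap = None, float("inf")
    for p in candidate_prices:
        # Share of respondents for whom price p is at or below "too cheap"
        pct_too_cheap = sum(r["too_cheap"] >= p for r in responses) / n
        # Share of respondents for whom price p is at or above "too expensive"
        pct_too_expensive = sum(r["too_expensive"] <= p for r in responses) / n
        gap = abs(pct_too_cheap - pct_too_expensive)
        if gap < best_gap:
            best_price, best_gap = p, gap
    return best_price

# Made-up responses (prices in pounds), purely to show the shape of the data
responses = [
    {"too_cheap": 2, "bargain": 4, "expensive": 8, "too_expensive": 12},
    {"too_cheap": 3, "bargain": 5, "expensive": 9, "too_expensive": 15},
    {"too_cheap": 1, "bargain": 3, "expensive": 7, "too_expensive": 10},
]
print(optimal_price_point(responses, range(1, 16)))
```

With only three respondents the "optimal" zone is wide, which is exactly why real surveys need a decent sample; full analyses also use the "bargain" and "expensive" curves to mark the acceptable price range.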

How do you ask users these kinds of questions?

But to get accurate insights into willingness to pay and feature priorities, how do you test van Westendorp and maxdiff? This was harder than it seemed.

  • Most "general purpose" survey tools (SurveyMonkey etc.) don't offer these question types.
  • That left "specialist research" enterprise (read: expensive) tools like Alchemer, with opaque and difficult-to-compare pricing models amounting to thousands of pounds, lengthy conversations with account managers about survey configuration, and basically weeks of faff.
  • Alternatively, there is a professional services community that runs this research for you, using their own resources. Still thousands of pounds.

Does this best practice product design work really have to cost thousands of pounds?

We spent many calendar weeks investigating this, pretty unhappy given that "all we need is SurveyMonkey with MaxDiff and van Westendorp". It seemed crazy that there wasn't an option.

Until there was, just in time. Enter SurveyKing. They completely disrupted the "specialist research" market by offering these much more advanced survey tools at mass-market pricing. We ran a survey with SurveyKing and it cost us $80. BOOOM.

SurveyKing was the first "mass market" survey tool offering the "advanced" question types we'd identified as best practice for willingness-to-pay research. They also offer Conjoint and Gabor-Granger, other advanced types we might use in future. Previously these were only available in specialist tools costing thousands.

We were very happy that we could execute product design surveys matching Monetizing Innovation's best practice.

But how do you test willingness to pay for a novel concept?

The case study I remember from Monetizing Innovation basically paved the way to the launch of the Porsche Cayenne. Users gave great feedback on price and on car features (cup holders vs alloy rims, for example). This was very interesting.

So we designed a survey and ran a small test first against some of our early audience.

We listed the features and asked people to price them (using van Westendorp) and rank them (using MaxDiff).
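For the MaxDiff side, here's a minimal sketch of the simplest way the responses can be scored: plain counting, where a feature's score is (times picked as most important minus times picked as least important) divided by times shown. Real tools typically use more sophisticated models (e.g. hierarchical Bayes), and the feature names and task data below are invented for illustration.

```python
# Count-based MaxDiff scoring (illustrative). Each task shows a subset of
# features; the respondent picks the most and least important one.
from collections import defaultdict

def maxdiff_scores(tasks):
    """tasks: list of dicts with keys 'shown', 'best', 'worst'.
    Score = (best picks - worst picks) / times shown, in [-1, 1]."""
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for t in tasks:
        for feature in t["shown"]:
            shown[feature] += 1
        best[t["best"]] += 1
        worst[t["worst"]] += 1
    return {f: (best[f] - worst[f]) / shown[f] for f in shown}

# Invented example tasks, just to show the data shape
tasks = [
    {"shown": ["search", "offline", "share"], "best": "search", "worst": "share"},
    {"shown": ["search", "speed", "share"], "best": "speed", "worst": "share"},
    {"shown": ["offline", "speed", "search"], "best": "search", "worst": "offline"},
]
scores = maxdiff_scores(tasks)
print(sorted(scores, key=scores.get, reverse=True))
```

Even this crude scoring shows why the question format works: every choice is a forced trade-off within a small set, so you get a relative ordering without ever asking anyone to rank the full list.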

One of the survey panels we used as a basis for MaxDiff questions, to get a sense of relative priority, contained too many novel ideas that users couldn't comment on with authority. Compare car cup holders and alloy rims, which users will already have a good sense of.

They bombed.

Why?

Our participants didn't understand many of the features, because they were novel ideas.

Sadly the Monetizing Innovation book came up short and attempts to contact the author drew a blank. We were on our own, which seemed crazy. Hopefully by the time you read this there’s a body of knowledge out there for startups to bridge this gap.

What was key: users need some visceral experience of a new concept to be able to comment on it. Otherwise they have no idea, and you'll get questionable data.

And with this we entered the rabbit hole of how to prototype in the wild. Spoiler: it blew out our product development times from days to months.

Read on for that adventure!



Ex-Google Technical Product guy specialising in generative AI (NLP, chatbots, audio, etc). Passionate about the climate crisis.