The fastest, cheapest, safest way to a hit product

We all want our products to be successful. We want to grow faster, sell faster, and take the fewest risks in doing so. Here’s how.

Joanna Weber
Bootcamp
7 min read · May 18, 2024


Illustrative photo of a graph by Markus Winkler on Unsplash

Imagine three teams.

The CEO offers a prize to the first team that can deliver a product generating more than £5,000 per month in revenue.

Each team has a product manager, a product marketing manager, two engineers and a designer, and can borrow time from other functions across the business or develop a business case for additional resourcing. They have a small operational budget and a determination to win.

Team A

The product manager looks through helpdesk tickets and customer reviews for ideas, and the product marketing manager talks to the sales reps. They think they have a pretty clear idea of what’s needed.

The product manager gives the engineers a list of requirements, and together they hand the designer a rough sketch of what's needed. The designer develops some wireframes, the team gives feedback, and the designer creates a mockup from which the engineers build their MVP (as in, 'working product').

They put out a survey, and 30 of the 45 people who answer say they would buy the product. It's a surefire hit!

The speed has been amazing: just three weeks from challenge to launch! Now to gather feedback, make a few tweaks, and collect the prize.

The feedback … well, that’s the tricky part. They’re getting signs of interest, but the few who sign up aren’t really engaging with it.

The analytics show people opening the product, clicking on a couple of tabs, and closing it again.

They put out another survey, but only half a dozen people respond. Most say they like it, and there are no clear answers as to why it isn't gaining traction. It is growing, but just … not enough.

The designer sets up some 15-minute calls with a handful of users, and they all struggle to complete the user flow.

Months of frustration follow, with extensive redesigns and user testing. Every new prototype is sent back to the drawing board after testing. Any costs they’ve saved by launching in under a month are dwarfed by the cost of fixing technical debt and usability issues, and they’re nowhere near product/market fit.

It’s going to take a lot more work.

A Gantt chart showing the progress of Team A

Team B

The first thing the product manager does is commission two reports from the research department: an analysis of the market landscape, including competitor positioning and audience segmentation, and a needfinding/discovery project of deep-dive interviews into user needs.

The manager reviews the research and draws up a detailed list of requirements, which is handed to the engineering team. Later, the engineers present the designer with some working software and ask for an attractive web page to house it.

The designer makes it look beautiful, and the team proudly shows off their product, built in secret over eight gruelling months to the meticulous standards of the requirements list, with careful reference to the customer insights in the research report and every feature they imagine customers could desire.

The team launches the product amid fanfare.

Tumbleweed.

The team can’t understand why consumers aren’t delighted — surely this is exactly what they asked for!

A Gantt chart showing the progress of team B

Team C

The product manager submits a business case to ensure sufficient resourcing for continuous access to a market researcher and a UX researcher for at least the duration of the project. She asks the researchers to facilitate a workshop for the whole team, where they map out their assumptions in the form of personas and test them through user interviews, conducted by a researcher with the product team present.

At the same time, the product marketing manager and market researcher map the landscape and competitors. They make sure the UX researcher asks about those competitors in the user interviews. They reconvene and compare: a list of validated user problems, their current (competitor) solutions and the weaknesses of those solutions.

The product manager highlights the problems that align with company strategy and would be most profitable to solve. They choose a favourite, and the designer leads them through a game of 'How Might We …?' to narrow themes for exploration.

They sketch and debate ideas, and choose a few. What would have to be true to make this a winning idea?

The first three are discarded right away: a quick Google search on a phone shows they are completely unsuitable. The fourth would be too costly to build; the fifth wouldn't have enough customers.

They move on to the sixth and sketch out eight completely different ways to do a similar thing: really, just a stick man and a few bullet points explaining the idea (for example, a 'mechanism to provide feedback on homework for students' might be a game app, a test book with answers, or a tutor service). They choose three, refine them, and invite in some colleagues and industry experts, who immediately point out weaknesses. They show users, and one of the ideas is eliminated.

Two are left. One revisits a concept that has previously been tested with users, and market demand for something of its type has already been validated elsewhere, so researching user preferences would be pointless duplication. It isn't something you can show in a drawing: it's a functional piece of code, where 100% of the risk lies in whether it physically works in the end user's environment. The only way to test it is software, but they aren't asking whether users like it, only whether it works.

The designer works with the engineers to narrow it down to the smallest possible unit of value: build one thing to do one thing, and don't worry about how it looks. One of the engineers builds it in three days.

The other is a riskier idea: it would be really cool, but they don't know whether it's what users actually want. All they need to know is whether, in principle, users would buy it.

The designer creates a page on the website with an advert for the (non-existent) product and a button to ‘buy’ it now. When users click, they are taken to a sign-up form to join a waiting list.
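
For a sense of how little engineering such a fake-door page needs, here is a minimal client-side sketch in TypeScript. Every name in it is an assumption for illustration (a hypothetical #buy-now button, an /analytics logging endpoint and a /waitlist sign-up page), not something from the team's actual build.

```typescript
// Fake-door test: the "Buy now" button records purchase intent, then shows
// the waiting-list sign-up instead of a checkout. No product exists yet.
// The element id and both URLs below are illustrative assumptions.

const buyButton = document.querySelector<HTMLButtonElement>("#buy-now");

buyButton?.addEventListener("click", (event) => {
  event.preventDefault();

  // Log the click as a signal of purchase intent. sendBeacon is used
  // because it survives the page navigation that follows.
  navigator.sendBeacon(
    "/analytics",
    JSON.stringify({ event: "fake_door_buy_click", ts: Date.now() })
  );

  // Route to the waiting-list form rather than a (non-existent) checkout.
  window.location.href = "/waitlist";
});
```

Comparing clicks on the button with visits to the page gives a rough conversion rate for purchase intent before a single line of product code is written.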

In the meantime, the researcher sets up half a dozen calls with would-be customers fitting the validated persona profile and invites the engineers to watch as each user is manually 'walked through' the undesigned product, with conversation taking the place of the interface. On those calls they also try the software sample and find that it doesn't function correctly in the end user's environment: if it worked, it would solve the need.

There are few sign-ups on the shadow page, and the non-working software is scrapped. The team revisits the board from the workshop and chooses two more ideas to prototype. This time, one is a Figma prototype tested asynchronously; for the other, the idea of a robot tutor with a chrome hardware shell is represented by an iPad, a cardboard tube and an empty soda bottle, which the researcher tests with passers-by at a conference.

Gantt chart showing the progress of Team C
(Imagine a few feedback loops — design/development is iterative!)

Who won … and why?

Team A went to market in three weeks but spent three months trying to gain traction, which they never achieved. When they finally gave up on their idea, they had to return right back to the beginning, losing many months in the process.

The underlying problem is that they had never truly understood the needs of the customers they were solving for. The research was conducted by people without research expertise, and their biased survey gave misleading results. Because they hadn't deeply investigated their user profiles, they knew customers were struggling but didn't understand why: they never discovered that their users lacked the specific domain knowledge the team had assumed, so every prototype had the same usability issues, no matter what changes they made.

Team B secured rigorous research upfront, but forgot that the world is not static. From challenge to launch was nearly ten months, by which point external events had overtaken the facts in the initial report: the market had simply moved on.

Team C invested the most in research, and recouped those costs in saved time. From challenge to elimination of the first two prototypes took four months, but because they still had another six promising ideas, it took only a few more days to eliminate four and get lucky with a fifth.

Even though they were racking up 'failures', they were really just eliminating bad ideas more quickly. They matched the research method to the problem, so some ideas could be built, tested and eliminated in just a day! Once they were on the right track, their deeper customer understanding meant they got more right the first time, and because they had spoken to users so often, they weren't out of touch like Team B.

The fastest, cheapest way to a hit product is to understand your users.

  • Do it properly. Hire a pro researcher. Dive deep.
  • Don’t do one-and-done research — talk to users at every stage
  • Generate a large backlog of great ideas and aim to eliminate as many as you can, as quickly as you can. In general, testing a mockup is cheaper than testing working code, and almost anything is cheaper than testing an MVP
  • Reduce risk by reducing the cost of failure: after investing up front in deeply understanding the problem, test small, cheap and often.
