A Practitioner’s Reflections on Strategyzer’s new book

Before we start
What you are about to read are my humble reflections on this new book from Strategyzer. I’ll give you 3 reasons why I “love” this book and 2 aspects I think deserve more attention.
“Lasse —so what experience do you actually have doing experiments?” you should be asking. Well, here’s the ultra short version of my background.
- Business educated
- EX-IBMer
- EX-co-founder of high tech startup (read our 3 articles about how we could have failed faster using experiments here)
- Now management consultant helping corporations create growth and become a little bit more like startups
(lately I’ve been a part of a small group working with Eric Ries — see him here in Denmark at our Thought Leader Event ’19)
During these different endeavors I’ve tried these experiments (named as in the book):
- Letter of intent
- Split test
- Mock Sales
- Data sheet
- Life-sized prototype (high fidelity)
- Clickable prototype
- Paper prototype
- Pre-order
- Mash-up
- Concierge
- Card sorting
- Buy a feature
- And then a lot of experiments we run at Implement on various customer projects when implementing change initiatives (something that should be an appendix to this book called “Testing Change Ideas”)
Let’s get started — TL;DR version
Is it worth your money? If you’re developing new ventures, products or services as a startup or inside a big corporation — YES. Osterwalder, Bland and their team did a really great job!
You can get a really great peek into the book here (+40 double pages of the book released as a teaser)
__
My inner experimentation nerd loves this book. Like a lot. Here’s why:

- It is needed
Almost everyone wants to do experiments with real customers — very few know how to do that well. This book helps you with “the how”.
- It’s “easy” yet “hardcore”
In the best Strategyzer style you can flick through this book in a day and become a lot smarter. At the same time it clearly explains how rigorous and scientific you have to be to succeed even a little bit with experimentation.
- It skips the concept of MVP (also a shortcoming)
In my view the term Minimum Viable Product is more discussed than used. “Testing Business Ideas” covers all aspects of experimentation without going into the discussion of what an MVP is or using the term even once.
The “Practitioner” in me (a management consultant who has to make this stick in the reality of large corporations) thinks the following deserve a little bit more attention:

- Is this a corporate plug-in or a reason to change everything?
Does it require corporations to rethink or remove their stage-gate process, the way they fund projects, the way they are organized, strategize etc., or can they “just” replace their front end with this or incorporate it a little bit in their Product Development Process?
- Experimentation is 60% generic, 40% specific. This book mainly covers the first part.
The second part, about what is specific to a company’s customer relations, industry regulations etc., is missing, but still very much needed if you want to actually do it.
Too Short; Tell Me More — Part 1: My inner nerd
Love it reason #1: It is needed
When I found out that Strategyzer was working on this book I was working on an article titled “The ultimate guide to real experimentation”. During late nights after working with a customer located in a remote part of Denmark I tried to gather everything about “MVPs”, “Customer Discovery”, “Assumptions”, “Experimentation” etc. in one place: all the books I’ve been using myself, prototyping slides, Medium articles, definitions etc.

Why? Because I thought it was needed. Books like “The Lean Startup” and “Value Proposition Design” convinced the world that experimentation is needed, but left a lot wondering: “how”?

In my experience most corporations know how to test for feasibility. They do prototypes, quality tests, manufacturing tests etc. The technical uncertainty and risk of malfunction within any product are very low when launching.
But when it comes to desirability and viability most corporations use what I would call “fake evidence”.
The desirability question is often covered either by a visionary Product Manager who (somehow) knows what customers want better than the customers themselves, or by an “insights study” which, in the worst case, is outsourced to an agency and then distorted through hand-overs from agency to insights department to management and then sometimes to project teams.
The viability question is covered by estimations and predictions. Using trend reports, market studies and competitor analysis, teams build five-year business cases. But those involved in making and approving these know how fast they become “wrong”. A week after its approval a competitor can launch a product that changes e.g. our differentiation strategy, price point or marketing claim, with game-changing consequences for the business case and the product itself.
We continue to do these activities because they feel like they decrease the vast risk and uncertainty inherent in any project/strategy, and because they seem like the best tools we have right now to deal with that uncertainty. But as Testing Business Ideas clearly demonstrates: there are valid alternative ways of dealing with desirability and viability uncertainty.
So in my experience it addresses a very clear need in the market right now: “Help us understand experimentation and make it a core part of our business”.
Love it reason #2: It’s “easy” yet “hardcore”
Involving customers way too often becomes focus group interviews or a generic target segment study about pains, gains, needs etc. The results you get from these interactions are hardly reliable and easily questioned by management or peers (rightfully so).
We should engage with customers and come back with strong evidence — not empty words from a bunch of customers “saying” what they are going to do.
In order to achieve “management-level-decision-ready” evidence you need to think like a scientist. That’s how hardcore you need to be about experimentation and customer engagement in general.

- Who are the participants in your experiment?
- How many of these do you need to get strong evidence? (do not guess — investigate what sample size you need)
- What exact metric are you looking for? (# of customers pre-ordering, conversion rate of an ad, # of customers putting your fake box in the basket for x reason — it is not enough to say “let’s see what happens”)
- What does “good” look like exactly? (define the specific outcome that either validates or invalidates your assumptions)
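To make the sample-size bullet concrete, here is a minimal sketch (my own illustration, not from the book; the function name and example conversion rates are hypothetical) of the standard normal-approximation formula for how many participants each variant of a split test needs before the result counts as strong evidence:

```python
import math

def split_test_sample_size(baseline, lift, ):
    """Participants needed per variant to detect `lift` on top of a
    `baseline` conversion rate, using the two-proportion normal
    approximation at 95% confidence and 80% power."""
    p1, p2 = baseline, baseline + lift
    z_alpha = 1.959964  # two-sided 95% confidence
    z_beta = 0.841621   # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Detecting a lift from 5% to 7% conversion needs roughly 2,200
# visitors per variant — far more than most teams would guess.
print(split_test_sample_size(0.05, 0.02))
```

The point of the sketch is the order of magnitude: the smaller the lift you want to detect, the faster the required sample size grows, which is exactly why “do not guess — investigate” matters.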
For many UXers, marketeers and definitely engineers, engaging with customers may be scary. Setting up all these criteria for making the engagement valuable makes something already scary also very hard.
Love it reason #3: It kills the term “MVP” (also kind of a shortcoming)
Is a “Minimum Viable Product” just any experiment or does it have to be an early version of your product actually delivering value to the customer? Should it be named Lovable Product? Testable? Usable?
In my experience the term MVP is more discussed than used. Ask the next colleague you meet how they define MVP. My assumption is that the following 2 things will happen:
- their answer will be long, vague and messy
- you will probably not fully agree with them
Recently, I was at a customer kick-off workshop. One of the participants pointed at the famous MVP poster below and said “If you can teach us how that one actually works, in reality, here in our company I want nothing else from this project”.

Also recently, while implementing a new Product Development Process at a customer working in hardware, a senior VP of R&D expressed his concern with the guiding visual we’d given teams (below): “This looks like we are going to deliver lower quality products to customers — I don’t want that”. This shows that linking the experiments you do early on with the final product is confusing to some.

The list of articles written defining an MVP is long and does not point to one clear or usable answer (here are some that discuss it):
- https://medium.com/the-happy-startup-school/beyond-mvp-10-steps-to-make-your-product-minimum-loveable-51800164ae0c
- https://link.medium.com/tIxRBKHdOZ
- https://www.linkedin.com/pulse/idea-validation-condensed-guide-itamar-gilad/

The term MVP was invented to push developers to launch earlier and get real market feedback on their product ideas. The value of MVP thinking is therefore to reduce uncertainty fast.
Adjusting the focus from MVP definitions to Experimentation is a great move to get people back on the track of reducing uncertainty — either by “just” showing a data sheet to a customer or by really launching an early version of their final product, selling and delivering it to customers.
The downside of this move is that teams can end up doing a lot of small experiments to gain a lot of insights and fail to strive towards that first sale to a customer — the ultimate evidence.
I’ll end this nerdy section with a quote from a conversation my team and I had with Eric Ries in December: “Launch should be the beginning of Product Development — not the very end”.
Too Short; Tell Me More — Part 2: The “practitioner” in me (the Management Consultant)
Is this a corporate plug-in or a reason to change everything?
Reading the book I struggled a little bit to understand how big a change Bland and Osterwalder suggest, which is natural as the book is written for a very wide audience. For startups or solo entrepreneurs this is just a matter of doing it. No one is stopping you.
But for teams inside corporations there are a lot of things preventing them from dealing with uncertainty through experimentation the way this book proposes.
- Most stage gates do not ask for Viability and Feasibility evidence
- Most reward systems reward engineers for doing as little rework as possible (hence prototyping for customer engagement will hurt functional/personal KPIs)
- Funding is provided yearly through budgeting processes — not based on leading indicators and traction/growth
- Most Product Development Processes are split into phases with ownership- and team handovers
These few examples of barriers should make it clear that making experimentation a core part of today’s product development requires changes in the deep systems of any large corporation. Funding and reward systems act like the gravity of a corporation. Experimentation might take off in a couple of teams, but if nothing is changed in the way they are funded or rewarded, gravity will bring these teams back down to the status quo.

Alternatively, and as a first step, corporations could replace the often very fluffy front end and insights departments with a rigorous experimentation process: a small cross-functional team de-risking strategic growth areas before they are turned into product development projects and the big expensive teams are staffed.
Experimentation is 60% generic, 40% specific. This book only really covers the first part.
The 60% generic part is:
- Realizing that all business ideas are based on assumptions
- Stating these assumptions as testable hypotheses
- Designing experiments that will yield the results you need to make a decision (not just empty words from a focus group)
- Carrying out experiments in a fast and easy way
The 40% specific part is all that is specific to your corporation. In my experience this is an overlooked part of experimentation. I’ve often encountered or heard about customers that wanted to do experimentation, understood the theory behind it, acknowledged the examples from other companies, but then concluded that it could never work for them: “We can’t risk hurting our brand”, “Our sales people will not let us talk to customers”, “Experimentation is only for software companies” or “Competitors will steal our ideas!”
To succeed you need to take the specific part into account, for example:
- How many customers you have and your relation to them

There is a big difference between designing an experiment in a B2C market with a million customers and in a B2B market with 3 big customers owning +90% of the market.
In a big B2C market you can easily test with 1,000 customers and gain statistical significance while doing so without the majority of the market being exposed to the experiment.
In a small B2B market you need much more commitment from your customers to get strong evidence due to the low number of customers while the entire market might be exposed to the experiment.
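To illustrate what “statistical significance” means in the B2C case, here is a quick sketch (my own, with hypothetical numbers, not from the book) of the standard two-proportion z-test you could run on the results of such a split test:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two observed conversion
    rates (conversions / participants per variant)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical B2C test: 60/1000 pre-orders for variant A vs 85/1000
# for variant B. |z| > 1.96 means significant at the 95% level.
z = two_proportion_z(60, 1000, 85, 1000)
print(round(z, 2), abs(z) > 1.96)
```

With 3 B2B customers no such test is possible, which is exactly why the evidence there has to come from depth of commitment rather than sample size.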
- The regulatory and safety nature of your industry

In a service business, e.g. private house cleaning, you could launch a new concept tomorrow without any approval from authorities or giving a lot of thought to the safety of your customers.
But if you are making medicine or producing brakes the reality is quite different.
- Hardware vs software

Experimenting in software is just way cheaper and easier than in hardware. You can easily create trustworthy mock-ups of a smartphone or desktop app and use them as a clickable prototype in Adobe XD or InVision, or build a simple landing page to test conversion rates and willingness to pay.
For hardware the same two experiments require a lot more. As a minimum, a landing page requires a 3D CAD drawing that can be rendered so it looks like a finished, sellable product. A “clickable prototype” requires 3D printing, sourcing components, wiring etc.
Hardware products take more time and cost more money to develop than software products. The risk is therefore higher, and experimentation is therefore even more needed. But as a hardware company you need to be really creative and find your way to fast and cheap experiments. There are a ton of ways to do this — doing hardware is no excuse for skipping experimentation. It’s the exact opposite.