Prototype v Pretotype

A discussion on Twitter has reignited this debate, which I thought had died five years ago. Like the #NoEstimates movement, it appears we’re stuck on the wording of something, and it all goes back to an original definition from Google.

Introduction

This has been written before. So I won’t reinvent the wheel.

The article is from 2011, a different time in our history. I’m not one for letting words stand in for concepts; arguing over wording has been the central nub of the failure of a number of movements to progress. Indeed, it is the central reason the #NoEstimates movement failed to move on from arguing about what is and isn’t an estimate. That is six years. It’s infuriating, a total waste of time, and I couldn’t care less for it. If someone isn’t willing to understand the concepts behind something, then their view isn’t worth my time or anyone else’s. However, here I present a number of devil’s advocate arguments, in part to convey the reasons for and against (both exist) the existence of a new word for a “low fidelity prototype”.

Disclosure: Geoff G and I have not seen eye to eye on some things before. Though I respect what he does (he’s an avid and important contributor to the field and well worth following), I just challenge things, and that is naturally abrasive.

Prototypes in Production Environments

In pretty much every industry, a prototype is a functioning product or service. If you’re trying to prove that a concept product is systemically valuable, you need something that adds that value, and you need to reliably prove that people see value in it.

Aston Martin Speedster Clay Model being leafed

Such value is not defined by the product or the company. It’s defined by the market. So there are two things that need to happen:

  1. Locate and identify value in the market (e.g. a solution to a problem) — This creates the “fitness metric”
  2. Prove that your product viably addresses that market (e.g. provides a good solution) — You work toward realising that fitness metric.

Everything else is sugar. It’s what gives it its flavour. But if it provides no nutrition, it’s useless and may eventually kill you.

The difference between prototypes and pretotypes requires an understanding of both of those concepts. One or the other isn’t sufficient.

Identifying value is done using a repertoire of skills (User Research is one, Guerrilla Testing another, literature review/meta-analyses a third, statistics a fourth…) but crucially, it involves validating hypotheses within the business environment. Hence, it all depends on what question you wish to answer. It’s why we have validation boards and why some of us apply statistical research methods to the exercise (being a bit hard-nosed: if you don’t apply the latter, you’re not doing research, especially since the risk of type 1 and 2 errors is high and any subsequent spend on inconclusive results is the worst gamble of the lot).
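To make "statistical research methods" concrete, here is a minimal sketch of one such check: a two-proportion z-test on conversion counts from two variants. All numbers are illustrative, not from the article, and it uses plain Python rather than any particular stats library.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Null hypothesis: both variants convert at the same rate.
    Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 120/1000 conversions on A vs 150/1000 on B (hypothetical figures).
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The point is not this particular test; it is that without some such machinery, “the customers liked it” is an anecdote, not a validation.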

The simplest question, usually with the greatest value, is

“Is this a viable solution to the problem?”

When is a Pretotype, not a Prototype?

The problem with the vast majority of the commentators on the thread bashing the idea of pretotypes is that they don’t seem to understand the latter of the two points above, and thus how unimportant the words are. Sadly, this is very common. It is a critical gripe of mine about the LS community, my LS community, of which I am a part (and so wish to see improve, for collective benefit). The vast majority of those I’ve met in the LS community have a total lack of analytical literacy, and there is a very good chance their experiments yield no real value beyond what can be achieved by accident with a few simple changes (which is often OK, but can easily dry up).

There I’ve said it. Taboo. Especially as I’m part of this. For clarity, it’s certainly no more or less than the general population, but that naturally means there is a general lack of understanding of how to both measure need and provide optimal value. Make no mistake though, all goal-aligned actions provide value in some form.

So let’s walk through that one sentence and explain what I mean.

You PROVE…

This means the learn phase of Lean Startup (or any methodology) must reliably close knowledge gaps. It means experiments must have:

  • A good null hypothesis — The opposite of what you’re trying to prove. Note, the quote above is not a null hypothesis.
  • Be randomised…
  • Run against a control group (one of the A or B must be it) — Up to here, we’re all on the same page.
  • Use a big enough sample to deal with type 1 and 2 errors — Do you know how big your sample needs to be to ensure that the likelihood of your results being wrong is small? (small-p) [1]
  • Deal sufficiently with type 1 or 2 statistical error — Most start to fall down here, as they don’t account for covariates and confounding variables in their analyses. An example of why that is important is to be found here.
  • Design and sequence experiments to segment the search space as fast as possible…[2]

Any of those falling down means you have failed to reject the null hypothesis, which in turn means you cannot accept the alternate (i.e. your assumption/hypothesis; the validation is inconclusive). So you have not proven it.
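Bracket [1] above, knowing how big your sample needs to be, can be made concrete. Here is a sketch of the standard sample-size formula for a two-proportion test, with the normal quantiles hard-coded as assumed constants so the example stays dependency-free; the conversion rates are illustrative.

```python
import math

# Standard normal quantiles for common significance/power choices
# (assumed constants rather than pulled from a stats library).
Z = {0.80: 0.8416, 0.90: 1.2816, 0.95: 1.6449, 0.975: 1.9600}

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a shift from
    baseline rate p1 to rate p2, two-sided test at the given alpha,
    with the given power (i.e. controlling both error types)."""
    z_alpha = Z[round(1 - alpha / 2, 3)]
    z_beta = Z[power]
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a lift from a 10% to a 12% conversion rate needs ~3,839
# visitors in each arm at 5% significance and 80% power.
print(sample_size_per_arm(0.10, 0.12))
```

Note how quickly the required sample grows as the effect you want to detect shrinks; halving the lift roughly quadruples the sample. This is exactly why running an underpowered experiment and acting on the result is the “worst gamble of the lot”.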

…Your product VIABLY…

This means that stakeholders, including yourself, do not find the implementation of your product prohibitive. This involves:

  • Running experiments for as long as needed, but no longer (the benefits of small-p optimisation fall away very quickly after that). Once you’ve run your experiment to bracket [1] above, you should move on to the next validation point. Anything else is just waste: you gain next to zero further knowledge from a randomised population, and you can even put off other paying work to fund that waste.
  • Ensuring that the product and its development can be manufactured for a “benefit” (profit, public health good or something else) and thus deliver that value. This also means taking account of the cost of production of all prototypes and products, the sales and marketing channels etc. Thus…
  • Systemically delivering value and knowledge pretty much all the time — No capital expenditure, surplus value in each transaction, and alignment of the important units (e.g. time-wages, fixed costs, opex and Rate of Return). This aims for a consistent Rate of Return, which also has a time variable (e.g. paying someone an annual wage means you have a hurdle you have to get over to break even, never mind growth).
  • Producing experiments that can split the search space as fast as possible, for the amount of money available for it. Which completes bracket [2] above.
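The wage “hurdle” in the Rate of Return point above is just arithmetic, and it is worth doing. A trivial sketch, with all figures hypothetical:

```python
import math

def break_even_units_per_month(annual_wages, annual_opex, margin_per_unit):
    """The 'hurdle': units you must sell each month just to cover the
    fixed cost base, before any growth at all. Illustrative only."""
    monthly_hurdle = (annual_wages + annual_opex) / 12
    return math.ceil(monthly_hurdle / margin_per_unit)

# One employee on 60k a year plus 12k of other opex, with 5 of margin
# per unit sold, means 1,200 units a month before you break even.
print(break_even_units_per_month(60_000, 12_000, 5))
```

Every experiment you fund has to be paid for out of cash that is also servicing that hurdle, which is why cheap experiments matter so much.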

On that last point, the illustration I give when speaking is to ask someone to think of any whole number between 1 and 100. Your job as speaker or another volunteer is to guess what that number is. As someone paying for experiments, you know you can segment that entire space and find it in 7 guesses [3]. That puts a natural cap on the amount of money you need to spend, via the number of experiments you need to run. The individual experimental cost then becomes the main variable, not the number of them. Indeed, the cheaper you make each experiment and experiment set, the more of them you can run, the further your cash stretches and the more likely you are to find that fit. That is viability in action :)
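The guessing game above is simply binary search, and the 7-guess cap falls straight out of ceil(log2(100)):

```python
import math

def guesses_needed(n):
    """Halving the search space each time finds any whole number in
    1..n within ceil(log2(n)) guesses."""
    return math.ceil(math.log2(n))

def find_number(secret, low=1, high=100):
    """Binary search the range; returns how many guesses it took."""
    guesses = 0
    while True:
        guesses += 1
        mid = (low + high) // 2
        if mid == secret:
            return guesses
        if mid < secret:
            low = mid + 1
        else:
            high = mid - 1

print(guesses_needed(100))                         # 7
print(max(find_number(s) for s in range(1, 101)))  # worst case is 7
```

Each well-designed experiment plays the role of one guess: it splits the remaining search space, so the budget you need scales with the logarithm of the space, not its size.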

ADDRESSES that market

This is a crucial point. In LS, it asks “Is it the wrong product?” I probably don’t need to explain this, but most problems have more than one solution, or possibly (though rarely) none. It’s why maths folk insist on uniqueness proofs (i.e. is that the only solution?) on top of existence proofs (i.e. does a solution exist?). And of course, anywhere there are more variables than equations creates a natural optimisation (aka “viable”) space, where any solution that meets the constraints, including the value, is viable. This is great for design, as it allows us to come up with a multitude of different design ideas.

Viable product regions with ugly math symbols

The issue, though, is the testing of them. This returns us to the above point on viability. Your aim as a startup is to go on for as long as possible without burning through all your money, accounting for inflows of cash, and to avoid shutting up shop. What some authors like to categorise as “the survival phase”. To do that, your LS aim should be to maximise the amount of learning you get for the money you put in. Indeed, get it wrong and it ceases to be viable, even if you hit the big time (see the fated tales of ye olde dot-com startups whose delivery services cost 60 dollars an hour and delivered ten 50-cent chocolate bars in that time — loss making).

Where do “Pretotypes” fit?

The answer is: in the spaces which prototypes don’t fit. Call it a “low fidelity prototype”; that’s fine. I care less about the wording than the concept (and arguably, so should everyone else. But we Brexited and elected Trump, so we are where we are).

The PalmPilot 3 low-fidelity prototype (aka pretotype). Given this was a completely new product, with new manufacturing methods, a functioning prototype simply would not have been viable.

Yet, for all the sweet aims of the statements above, which I don’t think anyone in the community disagrees with, the approach also comes with some drawbacks.

  • In product design, a functioning prototype can cost an inordinate amount and take a long time to build. As well as hitting you in the pocket, this takes time away from the experiment and thus makes the feedback cycle longer. Bring those costs down, but ensure you adhere to the points raised above.
  • Initial experiments should control variables to ensure that covariates and confounding variables do not interfere with the experiment and skew the result (this is different from the role of a fully functioning prototype, which aims to address whether the product systemically delivers value to the market, much as unit and integration testing do for software). Low fidelity prototypes (pretotypes) help assess whether you’re building the right product and answer some of the earlier questions. Think one-hour spikes in software development. Prototypes are the things that systemically tell you whether you’ve built the product right.

That last point is crucial to the discussion. In those situations where you can adhere to the two points at the core of this post, including that a high fidelity prototype can answer the question just as well and is just as viable, pretotypes and prototypes can easily be the same thing. The costs are roughly the same, yet they improve the systemic feedback cycle (as you will need fewer cycles to converge to the solution — smart experiments, as in bracket [3]) and hence improve the rate of return overall. Indeed, software is a perfect place for that to happen. Writing software has zero capital costs.

However, we have to be careful that we don’t assume product contexts are the same. They very definitely are not, as otherwise there is zero uncertainty and waterfall development would be the methodology of choice.

In situations where pretotypes and prototypes aren’t alignable, a low fidelity prototype (aka pretotype) is useful for answering a subset of the questions associated with the product. The issue here is that the existence of other variables means the scope or specificity of the resulting knowledge is susceptible to errors due to incomplete effects.

For example, and we’ve probably all had this: a customer looks at a low fidelity prototype and rejects it, even though you can later show them a high fidelity prototype of exactly the same thing and they accept it (a false negative), or vice versa. The customer needs information the low fidelity prototype doesn’t have and can’t give you; it isn’t fit for purpose any more. Hence why scope (e.g. the choice of question you need answering) is essential to the choice of tool and thus fidelity. Some questions can be broken down and re-segmented in such a way as to make “pretotypes” viable for answering them. However, where they fail to do so, they’re just pretotypes. A rebel without a cause.

To recap, the two points at the heart of this post:

  1. Locate and identify value in the market
  2. Prove that your product viably addresses that market
