Outcome-Driven Growth Marketing

5 ways we learnt to grow faster by thinking slower

Chris Guest
13 min read · Jul 2, 2019

Experimentation is the defining virtue of Growth Marketing.

What sets growth marketers apart is the understanding that success does not come from having a god-given opinion that is smarter than everyone else’s.

Our success lies in our humility to recognize that we don’t actually know all the answers, and in our ability to apply the scientific process (Hypothesis, Experiment, Analysis, Repeat) to discover empirical truth.

Indeed, many amazing people within our community have written about the importance of experiment velocity as a leading indicator of growth.
In other words:

The faster we cycle through growth experiments, the faster we can deliver growth to our business.

For example, in their 2017 book “Hacking Growth,” Sean Ellis and Morgan Brown share how they identified experiment velocity as the key driver of their growth.

From “Hacking Growth” by Ellis and Brown

“The results came 100 percent from running cycle after cycle of growth hacking process as fast as we could, learning what worked and then doubling down on those winning tests to drive even more growth.”
— Sean Ellis & Morgan Brown

But fixation on experiment velocity alone can be dangerous

And if I may be real with you for a few minutes, I’d like to share how I got burnt by misunderstanding this advice, and wasted many of my early growth efforts at Topology Eyewear.

I’ll then share 5 ways to avoid these mistakes, including what I believe to be some fresh perspectives that I hope can help even the more advanced growth professional.

1. The Slowest Growth Comes From Wasted Growth Experiments

My Junkyard of Wasted Experiments. Image Credit

When I first read Ellis & Brown’s story of the importance of experiment velocity, I perhaps took it a little too much to heart.

What I heard loud and clear was the truth that most experiments fail, almost by definition. Therefore, the more experiments we perform, the more successes can flow through in the numbers, and the faster we grow. Right?

And so what followed for my team at Topology was a regular cadence of experiment brainstorming, prioritization, and proud evangelism of how many experiments we had in play at any time. At the time I admired our tenacity and work rate, but now I recognize that:

I was too obsessed with experiment velocity,
and under-appreciative of experiment quality.

Avoid waste, plan slowly

When our startups have an expiring runway, the one thing we cannot afford is a wasted experiment.

By wasted experiments, I don’t mean failed experiments in the sense of “we didn’t get the answer we were hoping for.” I mean “we didn’t get a conclusive result at all” or “we got a result, but we don’t completely trust it.”

Like any scientific endeavor, the difference between good science and bad science is experiment design.

So what we learnt was to slow down and put the time into proper experiment design, so that we don’t waste time on experiments that deliver no useful learning.

It starts with writing each experiment down clearly in advance. I favor the format Barry O'Reilly proposed in his definition of “Hypothesis-Driven Development.” Although he may not have intended it for Growth Marketing, I think it translates very well.

We believe that…<Idea>
Will result in…<Outcome>
We will know we are successful when…<Metric/result>

What I like about this format is that it forces us to consider both the outcome we want (why we are running this experiment at all) and what the threshold for success would be. This is now the first step in our growth marketing experimentation process.
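To make the template concrete, here is a hypothetical, filled-in example (an illustration, not one of our actual Topology experiments):

We believe that… showing real customer photos on the landing page
Will result in… more visitors starting the fit-scan flow
We will know we are successful when… the fit-start rate improves by at least 20%, at 95% confidence, within two weeks.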

Example experiment plan with description of hypothesis, method and variables

What we find in practice is that it is often very difficult to word these answers, but this is exactly the point. It tends to be when we don’t know what outcome we really want, or what the measure of success is, that we waste an experiment.

Learning: Always consider the total cost of a wasted experiment

Sometimes we quickly fall in love with an experiment idea. And the temptation to “just run it and find out” is very compelling. But let’s pause to consider what that could actually cost us.

When you tally up the team time, the media spend, and the calendar time burned, you can see how, all told, the cost of even a quick experiment can run into thousands, or even tens of thousands of dollars.
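To make that tangible, here is a rough, back-of-envelope tally in code. Every figure below is an illustrative assumption, not Topology’s actual numbers; plug in your own team rates, media budget and burn.

```python
# A rough, illustrative tally of what one "quick" wasted experiment can cost.
# Every figure below is a hypothetical assumption, not real company data.

hours_spent = {
    "brainstorm_and_planning": 6,      # team hours in meetings and prioritization
    "design_and_copy": 10,             # landing page / creative changes
    "engineering_and_qa": 16,          # implementation, tracking, QA
    "analysis_and_reporting": 4,       # digging through inconclusive data
}
blended_hourly_rate = 100              # assumed fully-loaded cost per team hour ($)

media_spend = 3000                     # assumed paid acquisition to feed the test ($)
runway_weeks_burned = 2                # calendar time the experiment occupied
weekly_burn_share = 2000               # assumed share of burn attributable to growth ($/week)

people_cost = sum(hours_spent.values()) * blended_hourly_rate
opportunity_cost = runway_weeks_burned * weekly_burn_share

total = people_cost + media_spend + opportunity_cost
print(f"People: ${people_cost:,}  Media: ${media_spend:,}  "
      f"Opportunity: ${opportunity_cost:,}  Total: ~${total:,}")
```

Even with conservative numbers, an inconclusive test rarely costs “nothing.”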

Which leads me onto my next pitfall in growth experimentation…

2. Beware The Under-Funded Experiment

Under-funding an experiment be like: (credit)

When I first made the transition from working agency-side for big brands to working for a startup, I was acutely aware of the need to be as scrappy and frugal as possible. And so I often made the mistake of setting up an experiment, but under-funding the user acquisition.

Every experiment needs “n” — users exposed to the experiment — and the volume of users needed to reach statistically significant results is usually higher than we realize.

Simply put, growth experiments need users. Lots of them!

This was an especially acute problem at Topology, where we make custom-tailored eyeglasses, sized and designed from your iPhone. Because of our unique offering and funnel, we need to ensure we are acquiring the right customers, ones who go deep into the funnel, not just chasing upper-funnel results.

So while it feels like an egregious abuse of startup funding to drop thousands of dollars into Facebook ads, when you consider the sample sizes involved, it quickly makes sense.

Learning: Think about “Time to Good Data”

We can either give an experiment a relative “trickle” of users over a long time, or spike our UA spend to have the experiment hit confidence earlier.

In typical startup early days, our organic audience is small and our runway is short. So when you actually do the maths on the cost of slow vs. fast, fast usually wins.

Specify the resources, time and especially budget that the experiment needs to succeed.

To get there, we work backwards from the results needed and the expected conversion rates to calculate the audience that must be exposed to the experiment. Then we compare that to our organic audience and see whether it will deliver users fast enough. There are many online resources to help you calculate statistical significance, so I won’t elaborate further here.

If the organic audience does not provide enough users to get a meaningful result in time, or if the burn rate of a slow experiment costs more than buying the audience quickly, then it is better to write the check to Facebook and be done with it.
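For those who prefer to sanity-check the arithmetic themselves, here is a minimal sketch using the standard two-proportion sample-size approximation. The baseline rate, target lift and daily traffic are all illustrative assumptions, not Topology’s figures.

```python
# Minimal sketch: roughly how many users an A/B test needs per variant, using the
# standard two-proportion sample-size formula. All rates below are assumptions.
from statistics import NormalDist

def users_per_arm(baseline, lift, alpha=0.05, power=0.80):
    """Approximate sample size per variant to detect baseline -> baseline*(1+lift)."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

# Hypothetical funnel: 2% baseline purchase rate, hoping for a 30% relative lift.
n = users_per_arm(baseline=0.02, lift=0.30)
daily_organic = 300            # assumed organic visitors per day per variant
print(f"~{n:,.0f} users per variant, ~{n / daily_organic:.0f} days on organic traffic alone")
```

With those assumptions it works out to roughly ten thousand users per variant: a month of waiting on a trickle of organic traffic, or a much shorter (if pricier) sprint of paid acquisition.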

Yes, this is basic stuff so far.

If you are new to Growth Marketing, I hope you learn from my mistakes here. If this seems obvious to you, I hope the next, more recent learnings are more insightful…

Think slower to go faster

This soundbite is of course a homage to Daniel Kahneman’s book “Thinking, Fast and Slow.” The premise of the book is that each of us has two “systems” of thinking that interpret information and make decisions:

  1. System 1 is our “Fast” brain,
    which quickly makes judgements based on intuition and other instincts. It can help us with fast decisions and reactions, but it is susceptible to bias and can mislead us when used to make certain decisions.
  2. System 2 is our “Slow” brain.
    It allows us to carefully analyze, interpret and decide. With practice and effort we can make better decisions, but it takes much more effort, and is tiring mental work in practice.

In my last few years working in Growth Marketing, I’ve devoured many books, articles, slide shares and “canvases” dedicated to accelerating the flow of experiments through a growth team.

But what I learnt from Daniel Kahneman’s book is more fundamental advice on how to make better decisions, and how valuable it can be to engage my “Slow” brain to appraise a potential experiment before jumping in.

For example:

3. Avoid Unsurprising Experiments

Err, yeah. I guess that was kinda obvious. (Credit)

Have you ever run a growth experiment to learn something, only to learn something that was completely obvious in hindsight?

:Raises hand:

This is not because I didn’t try to predict it in advance; it’s because I was asking the wrong question and using my fast brain, rather than my slow brain, for prediction. The fast brain says “I don’t know what will happen, so let’s test and find out,” or at least it gives up on the question too easily.

In Thinking, Fast and Slow, Kahneman describes various biases and problems that make us humans bad at forecasting, such as “The Planning Fallacy,” where we overestimate our chances of success and underestimate our propensity to fail.

Kahneman offers as a solution the idea of “Pre-Mortem Meetings,” where team members are asked to imagine that their project has already failed, and then to explain, with the benefit of imagined hindsight, why that could be.

From experience, I can say that the pre-mortem approach works extremely well at identifying issues that my fast (optimistic) brain had not seen. So now at Topology we try applying this to growth experiments too, and are seeing similar benefits.

For each possible outcome, consider why it could have happened and what the drivers were.

How to Pre-Mortem Your Experiment Outcomes

  • For each variable, consider what it would take for that variable to have “won.” What would the measure or signal be?
  • Now fix in your brain the idea that this outcome HAS happened. Don’t try to debate whether or not it did; convince yourself for a moment that the outcome did happen.
  • Now ask yourself: Why did it happen?
    What are the plausible reasons why that outcome occurred?
  • Now consider: What does that tell you about this experiment?
    Is it something obvious and unsurprising? If so, do you still need to run the experiment?
  • Does this raise bigger questions about the design of the experiment, and in fact should you change or even skip the experiment?

To help invoke the power of hindsight, imagine that you yourself are the audience of the test. What are some of the foreseen or unforeseen factors within the test that could pull you towards the control/champion variable?

In practice, this extra planning work has enabled me to spot previously unforeseen problems and address them before running the experiment.

One of the greatest benefits of this exercise is that it can save you from the situation where you don’t trust your own data. If you’ve ever run a good experiment, got a clear result, and then post-rationalized why you can’t trust that data, you may know what I’m talking about.

4. Skip Underwhelming Experiments Completely

When you wanted 10X but you got 10% (Credit)

Sometimes, in a post-product/market fit business, any growth is good growth. If A beats B by 5%, then fantastic. Any optimization is good forward progress.

But I hardly ever work in such an environment. In the pre-product/market fit world that I love, we are looking for game-changing differences of 2X, 5X or sometimes more. Order of magnitude differences, not optimizations.

In our world:

Any experiment that delivers 10% when you needed 10X is a wasted opportunity, and slow growth.

And a high velocity of low-yield experiments just amplifies the losses, not the gains.

So to guard against this, we should ask ourselves:

What target does the experiment variable need to hit to be a success?
Does the experiment have any believable chance of delivering that scale of result?

Challenge yourself to put a number to the target for each variable to win.

Learning: Estimate Maximum Upside Potential

My fast brain used to give up on this too early. Again, my lazy “System 1” would protest, “I don’t know what will happen, that’s why I need to test!” And so I confess to having run many experiments that were technically successful, but didn’t deliver (and in fact never stood a chance of delivering) the growth we needed.

I’ve learnt that I have a bias for optimism, and I sometimes fall in love with the magical outcome that could come from a new idea or hypothesis.

I consider my colleague and Topology COO Rob Varady to be indispensable in his ability to help me figure out the maximum upside. He is just a genius at quickly quantifying exactly how big the best possible outcome could be.

For example, we were once testing the impact of only acquiring users with an iPhone X or newer. I was concerned that filtering out older devices had dwindled our audience to such a niche that it had taken us way off target.

Rob thought for a second, then with a quick Google search revealed that iPhone X or newer devices then accounted for about 50% of all iPhones in the USA. “So,” he explained, “the maximum upside of opening to all iPhones cannot logically be more than 2X our current reach. Unless you think older iPhone users have a higher propensity to buy Topology?”

This is a super simple example, but one that is hopefully easy to understand. There are many more examples with more complex variables, but perhaps I’ll save those for another post.

I hope this makes it easy to see that the “Impact” factor of the “ICE” (Impact, Confidence, Ease) experiment weighting can actually be calculated, not just guessed at. This is also now a standard part of our process.
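As a sketch of how that “Impact” calculation might look in practice, here is the logic of the iPhone example expressed as a quick bounding exercise. All of the specific figures (reach, conversion rates, thresholds) are hypothetical assumptions for illustration.

```python
# Sketch: bounding the maximum upside of an experiment before running it.
# All numbers are illustrative assumptions, not real figures.

current_monthly_reach = 40_000        # assumed users we reach today (iPhone X or newer)
share_of_iphones_covered = 0.50       # newer devices ~50% of US iPhones (per the example)

# Opening targeting to all iPhones can at most double reach...
max_reach = current_monthly_reach / share_of_iphones_covered

# ...and older-device owners are unlikely to convert better, so cap the lift there.
assumed_relative_conversion = 0.8     # assumption: older-device users convert a bit worse
current_conversion = 0.02
extra_customers = (max_reach - current_monthly_reach) * current_conversion * assumed_relative_conversion

print(f"Best case: +{extra_customers:,.0f} customers/month "
      f"({max_reach / current_monthly_reach:.1f}x reach ceiling)")

# That ceiling becomes the "Impact" input in an ICE score, rather than a gut feel.
needed_to_matter = 1_000              # assumed threshold for a result worth chasing
print("Worth running" if extra_customers >= needed_to_matter else "Skip or redesign")
```

Ten minutes of this kind of arithmetic is usually enough to turn a gut-feel Impact score into a defensible number.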

5. Model The Outcomes And Actions

Oh yeah? Really?! And then what?

Perhaps the greatest gains we achieved in growth effectiveness came from considering the decision tree of the possible outcomes of an experiment, and what that would lead us to do next. In other words:

If this experiment created X result, what would we do then?

For example, at Topology we wanted to test a major change to our growth model, and decide between 3 possible variables:

A. Customer pays $10 to receive a prototype of their glasses
B. Customer pays full price and proceeds Straight to Final order
C. Customer gets to choose: Optional Prototype OR Straight to Final

Upfront, we did not know which option users would prefer, or what the economic returns of the optional-prototype route would be. So we designed a thoughtful experiment, which would require:

  • A few days of product design to change the UI
  • A two-week sprint cycle to develop and release the software
  • 5 figures of media spend to get enough audience exposed to the test
  • At least 10 days of experiment duration to give time for results to mature

All in all, it was an expensive experiment, but we’d pre-modeled it based on the previously described 4 pillars and we were excited to see what benefit it could drive.

Then we asked ourselves the question: What if A won the experiment? What would we do next?

The first realization was that the threshold for victory was not the same for all 3 variables. There were other significant downsides to option A compared to option B or C, so A would need to “win” by a much higher threshold than B or C for us to consider implementing it for the future.

Then we realized something even more important: There was virtually no believable way that A would ever achieve that result. We had plenty of prior data that already taught us we did not wish to continue with the offer. So then we looked again at Option B and realized the same was true. We already knew it could not win by the margin it would need to for us to choose it.

So if there is only one viable winner, there is no need to run the experiment at all!

So we decided to stop all work on the experiment, implement option C in the app, and go with it. The results were immediate and game-changing. And those extra few hours of modeling out the outcomes and subsequent actions meant that we both saved the cost of the experiment AND delivered the growth to the business four weeks earlier.

We now always ask ourselves the question: “What would we do next if we got this result?” This has led us to completely skip several experiments that we would otherwise have wasted resources on, and to move on to more impactful work.
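The “what would we do next?” check can even be written down as a tiny decision model. The sketch below is illustrative only; the thresholds and ceilings are hypothetical stand-ins for the kind of prior data we used.

```python
# Sketch: compare each variant's required win threshold against the best result we
# believe it could plausibly achieve. If only one variant can clear its bar, skip
# the experiment and ship that variant. All figures are hypothetical.

variants = {
    # name: (required_lift_to_win, believable_max_lift)
    "A: $10 prototype first": (0.50, 0.10),   # must win big to offset its downsides
    "B: straight to final":   (0.20, 0.05),
    "C: customer chooses":    (0.00, 0.15),   # the default; only needs not to lose
}

viable = [name for name, (needed, ceiling) in variants.items() if ceiling >= needed]

if len(viable) == 1:
    print(f"Only one viable winner ({viable[0]}) -- skip the experiment and ship it.")
elif not viable:
    print("No variant can clear its bar -- redesign the experiment.")
else:
    print(f"Worth testing between: {', '.join(viable)}")
```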

Adding “Next Action” to canvas to consider what we would do next if outcome occurred.

Conclusion: Thinking slow and growing fast

While I still believe that experiment velocity can be a leading indicator of growth, please save yourself from making the same mistake I did.

Do not focus on experiment velocity without equal focus on experiment quality.

To be fair to Sean Ellis and Morgan Brown, they never did recommend velocity at the expense of quality, and there is a fair chunk of the same book that is dedicated to topics such as experiment design and sample sizes. But I for one missed the importance of the connection and I hope this helps save some of you from making the same mistake.

Moving beyond basic experiment hygiene and investing further in hard, slow thought before jumping into experiments, I realized that:

  • The fastest experiment velocity comes from completely skipping predictable or underwhelming experiments altogether.
  • Upfront, slow, brain-taxing consideration of the outcomes of, and responses to, an experiment significantly increases the impact of the growth experiments we choose to run.
  • Committing to ONLY running high-outcome experiments and THEN increasing the velocity of experimentation can deliver outsized growth returns for your business.

Thinking carefully about the outcome is the key.

Thanks for reading, I hope it is helpful to you. As thanks for making it this far, here is our draft experiment plan that you can try using yourself.

Example experiment canvas. Click here to try it yourself.

If you have questions or disagree, I would love to hear from you. You can find me on Twitter at @guesto or LinkedIn at /in/chrisguest, or of course you can follow me here on Medium for more.

Cheers!


Chris Guest

Traction Designer & Founder @ TractionDesignCo.com | Former CMO at Bryte, Topology, AKQA (Audi, McLaren, Ferrari). Focussed on investible traction for startups