Outcome-Driven Growth Marketing

5 ways we learnt to grow faster by thinking slower

Experimentation is the defining virtue of Growth Marketing.

What sets growth marketers apart is the understanding that success does not come from having a god-given opinion that is smarter than everyone else’s.

From “Hacking Growth” by Ellis and Brown

“The results came 100 percent from running cycle after cycle of growth hacking process as fast as we could, learning what worked and then doubling down on those winning tests to drive even more growth.”
— Sean Ellis & Morgan Brown

But fixation on experiment velocity alone can be dangerous

And if I may be real with you for a few minutes, I’d like to share how I got burnt by misunderstanding this advice and wasted many of my early growth efforts at Topology Eyewear.

1. The Slowest Growth Comes From Wasted Growth Experiments

My Junkyard of Wasted Experiments. Image Credit

Avoid waste, plan slowly

When our startup has an expiring runway, the one thing we cannot afford is to waste an experiment.

By wasted experiments, I don’t mean failed experiments in the sense of “we didn’t get the answer we were hoping for.” I mean “we didn’t get a conclusive result at all,” or “we got a result but we don’t completely trust it.”

So what we learnt was to slow down and put the time into proper experiment design, so that we don’t waste time on experiments that deliver no useful learning.

It starts with writing each experiment down clearly in advance. I favor the format Barry O'Reilly proposed in his definition of “Hypothesis-Driven Development.” Although he may not have intended it for Growth Marketing, I think it translates very well.

Example experiment plan with description of hypothesis, method and variables
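To make that concrete, here is a plan sketched in roughly that format. Everything below (the product change, metrics, budget, and field names) is my own hypothetical illustration, not an actual Topology plan.

```python
# A hypothetical experiment plan written in roughly the hypothesis-driven format.
# All details below are invented for illustration.
experiment_plan = {
    "hypothesis": "We believe adding a try-on preview to the product page "
                  "will result in more visitors starting the fitting flow.",
    "success_signal": "Fitting-flow starts per visitor up at least 15%, "
                      "at 95% confidence, within 14 days.",
    "method": "50/50 split test on the product page",
    "variables": ["control: current page", "variant: page with try-on preview"],
    "budget_and_time": "$5,000 paid traffic, 14 days",
}
print(experiment_plan["hypothesis"])
```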

Learning: Always consider the total cost of a wasted experiment

Sometimes we quickly fall in love with an experiment idea. And the temptation to “just run it and find out” is very compelling. But let’s pause to consider what that could actually cost us.

2. Beware The Under-Funded Experiment

Under-funding an experiment be like: (credit)

Learning: Think about “Time to Good Data”

We can either give an experiment a relative “trickle” of users over a long time, or spike our user acquisition (UA) spend so the experiment reaches statistical confidence sooner.

Specify the resources, time and especially budget that the experiment needs to succeed.

If the organic audience does not provide enough users to get a meaningful result in time, or if the burn rate of a slow experiment costs more than buying the audience quickly, then it is better to write the check to Facebook and be done with it.
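As a rough illustration of that trade-off, here is a minimal back-of-envelope sketch. The traffic, burn, and cost-per-user figures are invented, and the sample-size formula is only a common rule of thumb, not a substitute for a proper power calculation.

```python
# A minimal sketch (not from the article) of the "trickle vs. spike" trade-off.
# All figures and the sample-size rule of thumb are illustrative assumptions.

def required_sample_per_variant(baseline_rate: float, min_detectable_lift: float) -> float:
    """Rough rule of thumb (~80% power, 5% significance): n ~ 16 * p(1-p) / delta^2."""
    delta = baseline_rate * min_detectable_lift
    return 16 * baseline_rate * (1 - baseline_rate) / delta ** 2

n_per_variant = required_sample_per_variant(baseline_rate=0.03, min_detectable_lift=0.20)
total_users_needed = 2 * n_per_variant          # control + variant

# Option A: rely on organic traffic (a "trickle").
organic_users_per_day = 400
daily_cost_of_delay = 3_000                     # dollars of burn attributable to waiting
days_organic = total_users_needed / organic_users_per_day
cost_of_waiting = days_organic * daily_cost_of_delay

# Option B: buy the audience quickly with paid acquisition.
cost_per_paid_user = 1.50
cost_of_buying = total_users_needed * cost_per_paid_user

print(f"Users needed:  {total_users_needed:,.0f}")
print(f"Organic path:  {days_organic:.0f} days, ~${cost_of_waiting:,.0f} cost of delay")
print(f"Paid path:     ~${cost_of_buying:,.0f} of ad spend, result in days not months")
# If the cost of waiting exceeds the ad spend, "write the check to Facebook."
```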

Yes, this is basic stuff so far.

If you are new to Growth Marketing, I hope you learn from my mistakes here. If this seems obvious to you, I hope the next, more recent learnings are more insightful…

Think slower to go faster

This soundbite is of course a homage to Daniel Kahneman’s book “Thinking, Fast and Slow.” The premise of the book is that each of us has two “systems” of thinking that interpret information and make decisions:

  1. System 1 is our “Fast” brain.
    It makes quick, intuitive, automatic judgments with almost no effort, and it handles most of our moment-to-moment decisions.
  2. System 2 is our “Slow” brain.
    It allows us to carefully analyze, interpret and decide. With practice and effort we can make better decisions, but it takes much more effort, and is tiring mental work in practice.

3. Avoid Unsurprising Experiments

Err, yeah. I guess that was kinda obvious. (Credit)
For each possible outcome, consider why it could have happened and what the drivers were.

How to Pre-Mortem Your Experiment Outcomes

  • For each variable, consider what it would take for that variable to have “won.” What would the measure or signal be?
  • Now fix in your brain the idea that the outcome HAS happened. Don’t debate whether or not it did; convince yourself for a moment that the outcome did happen.
  • Now ask yourself: Why did it happen?
    What are the plausible reasons why that outcome occurred?
  • Now consider: What does that tell you about this experiment?
    Is it something obvious and unsurprising? If so, do you still need to run the experiment?
  • Does this raise bigger questions about the design of the experiment? Should you change it, or even skip it entirely?

4. Skip Underwhelming Experiments Completely

When you wanted 10X but you got 10% (Credit)

Any experiment that delivers 10% when you needed 10X is a wasted opportunity, and slow growth.

And a high velocity of low-yield experiments just amplifies the losses, not the gains.

Challenge yourself to put a number on the target each variable would have to hit to “win.”

Learning: Estimate Maximum Upside Potential

My fast brain used to give up on this too early. Again my lazy “System 1” would protest: “I don’t know what will happen, that’s why I need to test!” And so I confess to having run many experiments that were technically successful, but didn’t deliver (and in fact never stood a chance of delivering) the growth we needed.
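Here is the kind of back-of-envelope upside estimate I wish I had forced myself to do first. All of the numbers below are hypothetical, not Topology’s real figures; the point is only to compare the best case against the goal.

```python
# A back-of-envelope maximum-upside estimate (hypothetical numbers).
# The question: even if this variant "wins," is the best case anywhere near the goal?

monthly_visitors     = 50_000   # traffic exposed to the change
baseline_conversion  = 0.02     # current purchase rate
best_case_conversion = 0.024    # most optimistic lift we honestly believe (+20%)
revenue_per_order    = 300      # average order value, in dollars

max_extra_orders  = monthly_visitors * (best_case_conversion - baseline_conversion)
max_extra_revenue = max_extra_orders * revenue_per_order

growth_target = 500_000         # monthly revenue growth the business actually needs

print(f"Best-case upside: {max_extra_orders:.0f} extra orders, ~${max_extra_revenue:,.0f}/month")
print(f"Share of target:  {max_extra_revenue / growth_target:.0%}")
# If the best case is 10% of the target when you need 10X, skip the experiment.
```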

5. Model The Outcomes And Actions

Oh yeah? Really?! And then what?

If this experiment created X result, what would we do then?

For example, at Topology we wanted to test a major change to our growth model and decide between 3 possible variables. Running that experiment would have required:

  • A two-week sprint cycle to develop and release the software
  • 5 figures of media spend to get enough audience exposed to the test
  • At least 10 days of experiment duration to give time for results to mature

When we modeled out each possible outcome and the action we would take next, we realized there was really only one viable winner for us. And if there is only one viable winner, there is no need to run the experiment at all!

So we decided to stop all work on the experiment, and just apply the state of C in the app and go with it. The results were immediate and game changing. And those extra few hours of modeling out the outcomes and subsequent actions meant that we both saved the cost of the experiment AND delivered the growth to the business 4 weeks earlier.

Adding a “Next Action” field to the canvas to consider what we would do next if each outcome occurred.
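To make the idea concrete, here is a minimal sketch of that “map each outcome to its next action” check. The variables and the reasons in the comments are hypothetical, not the actual Topology options.

```python
# If every plausible outcome leads to the same next action,
# the experiment adds no information and can be skipped.

outcomes_to_next_action = {
    "variable A wins": "ship variant C",   # A's lift wouldn't justify its extra build cost
    "variable B wins": "ship variant C",   # B would need a redesign we can't fund
    "variable C wins": "ship variant C",
    "no clear winner": "ship variant C",   # C is the cheapest to operate
}

if len(set(outcomes_to_next_action.values())) == 1:
    print("Every outcome leads to the same next action -> skip the experiment and just do it.")
else:
    print("Different outcomes lead to different actions -> the experiment is worth running.")
```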

Conclusion: Thinking slow and growing fast

While I still believe that experiment velocity can be a leading indicator of growth, you should save yourself from making the same mistake I did.

Do not focus on experiment velocity without equal focus on experiment quality.

To be fair to Sean Ellis and Morgan Brown, they never did recommend velocity at the expense of quality, and there is a fair chunk of the same book that is dedicated to topics such as experiment design and sample sizes. But I for one missed the importance of the connection and I hope this helps save some of you from making the same mistake.

  • Upfront, slow, brain-taxing consideration of an experiment’s possible outcomes and our responses to them significantly increases the impact of the growth experiments we choose to run
  • Committing to ONLY running high-outcome experiments and THEN increasing the velocity of experimentation can deliver outsized growth returns for your business.

Thinking carefully about the outcome is the key.

Thanks for reading, I hope this is helpful to you. As a thank-you for making it this far, here is our draft experiment plan that you can try using yourself.

Example experiment canvas. Click here to try it yourself.

Chief Marketing & Growth Officer at Topology Eyewear :: @guesto