Some things take time to brew.
A good experimenter knows that, and doesn’t draw false conclusions when there are no immediate results.
A few examples
Two years ago, I created Tweetable Text, a simple WordPress plugin that makes individual sentences in your blog post tweetable. The vision was a better way for good ideas to spread, and the minimum success criterion was a handful of installs within a few weeks. I got none, so I moved on.
Recently I googled for it, and found that it had lived on unbeknownst to me. Over a year later, Tweetable Text was picked up by the Nieman Journalism Lab at Harvard, leading to a spike in interest and installs. I had missed a big chance to take this to the next level.
In another project, I created a small Gmail timer plugin and popped it into the Chrome Web Store. Nobody seemed to care. At some point, a user discovered a bug and a workaround, and posted both in a review. Once people could actually use the thing, the user base started to grow steadily. A few years later, I find my MVP has grown to around 700 users.
In both cases, imposing short-term success criteria on the experiments was a mistake. When nobody responded right away, I took it as a lack of interest in the product, but that wasn’t the right conclusion.
Channels like app stores often take time to build up steam. If we’re going to run experiments that use them as a proxy for customer interest, we have to give them time. It’s prudent to make sure our success criteria take into account a reasonable lag time, congruent with the channel we’re testing.
How much time is reasonable? To know that, we’ve got to research what others have done before us — and an easy way to do that is to reach out and ask!
Waiting for “the pop”
A common success story is waiting for the right moment, the hockey stick, the spike, the pop. But we can’t wait forever; there’s a growing opportunity cost to sticking with something that’s not clicking. And parking a project often brings its own problems: in cases like mine, we often don’t have the capacity to deal with the opportunity when it finally comes. I’ve noticed that people who manage their time well are better positioned to jump on these opportunities. To leverage contingencies, we need to make sure we are watching and able to act.
Choosing faster experiments
Given that lag time is sometimes long or unpredictable, we might choose to run experiments in a faster channel. No experiment is ever perfect, but comparing what each candidate experiment can tell us, and by when, often gives us clarity.
Portfolios rather than single-track iteration
Having learned this lesson, I now see it as one of the strengths of the labs model, where a number of projects run in parallel: while one experiment simmers, others keep moving.
I write about startup decisions. If you’d like to read more, you can subscribe here.