The pre-commitment mechanism for accountability in Lean Startup
Pre-commitment means deciding ahead of time what you’ll do if the results of a lean experiment turn out one way or the other. It was one of the most effective tools I used this summer to move five startup teams from ideas to revenue in ten weeks. In this post I’m going to dig deeper into the actual mechanism involved.
The start — figuring out what you want to learn
Lean Startup experimentation is a powerful way to determine whether your startup’s assumptions are true or false. The standard cycle Lean Startup describes is build-measure-learn: build an experiment, measure the results, and learn from them. Here’s how it worked for the University of Oklahoma summer startup accelerator.
Icarus Aerial Technologies spent the summer investigating potential uses for drone photography. One key issue they needed to figure out early on was which customer segment to focus on. They had heard from a variety of friends and acquaintances that certain industries, like farming and real estate, would love the opportunity to use their tech. That is an assumption that needed to be tested!
The first thing Icarus wanted to learn was whether farmers would be interested in the drone’s capabilities. It’s critical that the learning step (the hypothesis) is framed as a yes or no question, so it can be validated (farmers are interested) or invalidated (farmers are not interested).
The experiment — how you’re going to test your assumption
Icarus then needed to figure out how to test the hypothesis. There are a few simple ways to test if someone is interested in your product and my favorite is the pre-sell, or “Sell and Scramble”. Lean startup is heavily focused on not building products or services that people aren’t interested in. So a good experiment will test demand before building anything.
In this case, the experiment was for Icarus to approach 10–20 farmers, primarily at farmers’ markets, and try to get just one of them to invite the team out for a test flight. No money needed to be exchanged. Instead, Icarus was looking for a verbal commitment that they could fly over one farmer’s fields. If Icarus could find one, they would consider the test passed, or validated. If not, invalidated. This is a pre-determined indicator of success.
Icarus then needed to pre-commit to what validating or invalidating the test meant. In other words, regardless of the outcome of the test, what would they do next? This is very important because frequently a team will set up a test, invalidate it, and then continue on anyway because they felt the test didn’t really do what they wanted. This is a waste of time; in the summer accelerator’s case, a week out of ten. If the results of the test won’t change the startup’s mind about an assumption they hold, or won’t change the next thing they want to do, there isn’t any point in running the test.
I want to highlight that running tests is definitely not about throwing ideas against the wall and seeing what sticks. Tests are designed to learn certain things (like if a certain customer segment is interested in a product) and to validate assumptions and hypotheses.
For Icarus, if they couldn’t find one farmer out of twenty to invite them out to do a test run, it would mean that farmers were not a viable customer segment at this particular moment. Icarus decided that if they invalidated the experiment, they would move on to a different customer segment, probably real estate. If they validated the experiment, they would dig deeper into the needs of the farmers who invited them to fly.
Running the experiment
This part is conceptually the easiest (just do it!) but also the most difficult, because everything up to now took place in the safe confines of the classroom: on the whiteboard, in discussions, and in our imagination. Experiments always happen in the real world, where I can guarantee that things won’t go as easily or smoothly as planned.
Icarus faced the full gamut of responses from the farmers they talked to. Some ignored them, others yelled at them, and still others chatted but immediately shot down their ideas. All in all, it was quite clear that the people Icarus spent the week talking to were not interested in the idea.
Another key point: Icarus did not ask people what they thought about the idea in the abstract. They tried to actually sell the service. They asked for firm commitments in cash and time, and they were pretty firmly shot down.
Looking at the results
Which is actually great! It became clear to Icarus very fast that the local farmers they were speaking with were not interested in their service. After spending many months doing business planning in a classroom, coming up with a “sure-fire” idea, one that was already working in other parts of the world, they got an immediate no as soon as they approached some customers. That’s why I like actually trying to sell your idea to real people as soon as possible with this experimental approach. It means you don’t need to waste any more time once an assumption gets invalidated! This is good news, because it means you can spend your valuable time and resources chasing down another lead.
Adding additional insights
Validating or invalidating an assumption or hypothesis is the main point of running an experiment, but it isn’t the only one. I also asked all the startup teams to share their insights from the experiment. Insights are hard to capture broadly, but they often arise from the conversations the teams had with their potential customers after they were rejected. Many startups find their real business after pitching one idea and then asking one of my favorite questions after getting turned down: “What frustrations do you have that you wish I could build a solution for?”
Presenting experiments in class
Each week the startup teams came up with new experiments to run, just like the one Icarus ran with the farmers. In class the teams would present their experiments for the week on Tuesday morning. They would state very clearly:
- The hypothesis as a yes/no question
- The test they would run to validate/invalidate the hypothesis
- What success or failure looks like
- What they would do next in the case of failure, indicating what failure (invalidation) really meant
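The four-part template above can be sketched as a small record, with the next step chosen mechanically by the outcome; this is a hypothetical illustration (the names and the Icarus wording are my paraphrase, not an artifact the teams actually used):

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One weekly lean experiment, with both next steps pre-committed up front."""
    hypothesis: str        # framed as a yes/no question
    test: str              # how the team will validate/invalidate it
    success_criteria: str  # the pre-determined indicator of success
    if_validated: str      # pre-committed next step on success
    if_invalidated: str    # pre-committed next step on failure

    def next_step(self, validated: bool) -> str:
        # The pre-commitment: the outcome alone selects the next step,
        # so the team can't rationalize continuing after an invalidation.
        return self.if_validated if validated else self.if_invalidated

# Icarus's farmer experiment, paraphrased from the post
icarus = Experiment(
    hypothesis="Will farmers invite us out for a drone test flight?",
    test="Pitch 10-20 farmers, primarily at farmers' markets",
    success_criteria="At least one farmer verbally commits to a flyover",
    if_validated="Dig deeper into the needs of the farmers who invited us",
    if_invalidated="Move on to a different segment, probably real estate",
)

print(icarus.next_step(validated=False))
```

Writing both branches down before running the test is the whole point: when the rejection comes in, the plan is already on record.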
The following Tuesday we opened class with presentations of the results of the prior week’s experiments. The teams presented:
- Hypothesis: Here’s what we thought
- Experiment: Here’s what we did to test that thought
- Results: Here’s what we found out (the real data!)
- Insights: Here’s what else we learned
Accountability & Agile
The purpose of setting up these public experiments in class is to keep everyone accountable to each other. It is hard to slack off when you know you will be accountable for what you publicly said you would do (see the book Influence for a lot more on that). What ended up happening though is that the teams found themselves feeling accountable to me personally, rather than to the group as a whole. This works well as long as they are in the accelerator! But I wanted them to feel more accountable to each other during the summer as well as after, when the structure of the accelerator was removed.
What I would like to do next time is to have the teams provide more critique of each other’s experiments during the summer itself. I just read the Five Dysfunctions of a Team and accountability to the team as a whole is one key element. The teams are continuing to meet bi-weekly after the accelerator and are planning on holding each other accountable. For example: https://www.youtube.com/watch?v=St2BhIbOLUs.
One last point I want to close on is using a weekly Agile/Scrum sprint to structure the accelerator program. The purpose of a sprint is to pre-define work that needs to be done and allow teams to manage themselves. It also sets clear goals as to when the work needs to be finished. That structure kept the teams on weekly sprints, biting off small pieces of work (in the form of experiments) as we went along, rather than worrying about a bigger picture outside their control. It also allowed the teams to make steady progress each week, which is critical for startups in time-crunched situations.
Originally published at ericmorrow.com on August 2, 2014.