How to Build Teamwork in a Growth Team: My Seven Insights

GrowthBoard
May 10, 2017


This article is for startups that already have satisfied clients and traction but want more: more clients, more revenue, more growth. On the way to building your Growth Team there will be mistakes, problems, and solutions. Teamwork built around constant testing of growth hypotheses is hard work, and tuning the whole team takes several months. Our experience became the basis for a new growth management tool: GrowthBoard, a data-driven task manager for marketers and Growth Teams. But let's start with our problems and solutions.

1. What is a Hypothesis?

We argued for a long time about what a hypothesis is. Is an improvement to an acquisition channel a growth hypothesis? What if you only need to fix typos in the ad copy? The growth won't be perceptible, but you can't leave the typos in. Where is the border between a simple task and a growth hypothesis?

In the end, we decided that an expected, quantitative improvement in a metric is what makes a task a growth hypothesis. We agreed to formulate hypotheses in the form:

If <action> then <metrics change>

If you write hypotheses on sticky notes, you can split each note with a horizontal line: the action goes at the top and the expected metric change below.
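This format is just as easy to capture in code. Below is a minimal sketch of such a hypothesis record in Python; the field names and the example values are hypothetical, not something taken from our board:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One growth hypothesis in the 'If <action> then <metrics change>' form."""
    action: str      # what we will do
    metric: str      # which metric we expect to move
    baseline: float  # current value of the metric
    expected: float  # value we expect after the action

    def statement(self) -> str:
        return (f"If {self.action}, then {self.metric} "
                f"changes from {self.baseline} to {self.expected}")

# Hypothetical example:
h = Hypothesis(
    action="add social proof to the landing page",
    metric="signup conversion",
    baseline=0.021,
    expected=0.028,
)
print(h.statement())
```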

We got this insight a bit later, when we began to put stickers with both hypotheses and tasks on the same board. It turned out that hypotheses pulled the project forward, while tasks covered the routine that kept us from rolling back. Surprisingly, this let us regulate the growth rate: if the board shows two growth hypotheses and seven tasks, that is a signal that we are not paying enough attention to growth. Once it became visible, we stopped deceiving ourselves.

2. Hypothesis Tracking

Hypothesis testing proved to be a complex process. Some hypotheses took several days to test, others took weeks. Sometimes we discussed the result and got insights; sometimes we didn't collect any data on the hypothesis at all.

There was no single process until we tried Scrum. Teamwork noticeably improved by the second sprint. From Scrum we took Sprint Planning at the same time every week, a 15-minute Daily Meeting every morning, and a Sprint Review right before Sprint Planning. Not every problem was solved, but it was enough to stop taking on oversized hypotheses, to have time to test what was planned, and to discuss the result of each hypothesis.

Another huge help was a Kanban board for hypotheses, which we placed in the meeting room. For us, the following stages turned out to be optimal: Ideas, In progress, Validation, Insights.

All ideas that came up during the growth meeting went into the first column. Hypotheses being worked on moved to the second column. Once a hypothesis had been executed, it moved to the third column while the data was collected. Once the data had been collected, the hypothesis moved to the fourth column and the team had to record the insight.
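For illustration only, here is how those four columns could be modeled in a few lines of Python; the forward-only movement rule is our own convention, not a hard requirement:

```python
from enum import Enum

class Stage(Enum):
    IDEAS = "Ideas"
    IN_PROGRESS = "In progress"
    VALIDATION = "Validation"
    INSIGHTS = "Insights"

# Hypotheses only ever move forward through the columns.
NEXT_STAGE = {
    Stage.IDEAS: Stage.IN_PROGRESS,
    Stage.IN_PROGRESS: Stage.VALIDATION,
    Stage.VALIDATION: Stage.INSIGHTS,
}

def advance(stage: Stage) -> Stage:
    """Move a hypothesis card to the next column of the board."""
    if stage is Stage.INSIGHTS:
        raise ValueError("Hypothesis already reached the Insights column")
    return NEXT_STAGE[stage]
```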

3. Task Execution

A hypotheses board visible to every member of the team brought a lot of order to our work with hypotheses. Our weekly meetings became more productive, but we found another, deeper problem. Most hypotheses broke down into several subtasks for different team members, and there was no place for these subtasks on the hypotheses board. On top of that, there were always other ongoing tasks. As a result, it was unclear how to manage task execution. We solved this by setting up a separate Task Board with the following sections.

The board hung in the open space where the team works. Hypotheses from the second column of the Hypotheses Board were duplicated in the first column of the Task Board. Next to each hypothesis were the tasks relevant only to it, and below the hypotheses was the Backlog of routine tasks.

Hypothesis tasks and routine Backlog tasks lived on the same board, which once again made it obvious how fast we were moving forward, or whether we were merely trying not to roll back.

4. Who is Responsible for the Measurements?

As we kept improving, we found a new problem: data collection. Measurements were not always made, and they were often launched late or with errors, for example in analytics goals or UTM tags. Yet data collection is critical to the Build-Measure-Learn loop.
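A frequent source of such errors for us was inconsistent UTM tagging. As an illustration, here is a small Python sketch of a helper that appends standard UTM parameters to a landing-page URL; the URL and parameter values in the example are hypothetical:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def tag_url(url: str, source: str, medium: str, campaign: str) -> str:
    """Append standard UTM parameters to a landing-page URL."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    utm = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    query = f"{query}&{utm}" if query else utm
    return urlunsplit((scheme, netloc, path, query, fragment))

# Hypothetical example:
print(tag_url("https://example.com/landing", "facebook", "cpc", "spring_promo"))
```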

The solution we found was very simple. We introduced the role of project analyst and gave it to the employee with the most experience in web analytics. The project analyst was responsible for correct data collection on all hypotheses: before a hypothesis moved to the third column, the analyst verified how its data was being collected, and at the Sprint Review the analyst reported the results of each hypothesis.

Insight: if everyone is responsible for analytics, no one is. Every Growth Team should have the role of project analyst.

5. Information Gap

By assigning the role of project analyst we solved some problems, but new ones emerged. Colleagues kept interrupting the analyst whenever they needed data, which was especially inconvenient because the analyst combined these duties with marketing work. Several times a day someone would ask: ‘So, what about the results of the hypothesis tests?’

We quickly solved this by putting a monitor with real-time analytics in the common room. The analyst showed all current experiments and key metrics on the screen. It fit perfectly with the idea of Shared Understanding: the whole team works in a single information space.

6. Measurements are Deceiving

We noticed that the metric lift from a "successfully" tested hypothesis could fall back to its previous level, sometimes as soon as the next sprint. So we asked our analyst to dig deeper.

The result surprised and disappointed us. It turned out that most of our experiments had shown improvements within the band of statistical uncertainty, so we could not rely on them.

Insight: think about the statistical significance of an experiment before it starts. The "growth" you expect to see may be a statistical artifact or pure chance.

Very often we had tested conversion improvements in acquisition channels that had too few users, or we had chosen a channel that simply wasn't wide enough for the test. Yes, this is delicate work with a calculator and requires some knowledge of mathematical statistics, but it saves us from wasting time on hypotheses we can never validate.
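For illustration, here is a minimal Python sketch of the kind of pre-test calculation we mean, based on the standard two-proportion sample-size formula; the baseline and target conversion rates are hypothetical:

```python
from statistics import NormalDist

def sample_size_per_variant(p_baseline: float, p_target: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed in each variant to detect a shift from p_baseline
    to p_target with a two-sided test at the given alpha and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for power = 0.8
    variance = p_baseline * (1 - p_baseline) + p_target * (1 - p_target)
    effect = abs(p_target - p_baseline)
    return int(((z_alpha + z_beta) ** 2) * variance / effect ** 2) + 1

# Hypothetical example: lifting conversion from 2.0% to 2.5%
n = sample_size_per_variant(0.020, 0.025)
print(f"Need about {n} visitors per variant")
```

If a channel cannot deliver that many visitors within a sprint or two, the hypothesis is better reformulated with a larger expected effect or tested on a wider channel.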

7. Constant Improvements

As you can see, we constantly looked for and eliminated our weak spots. Another Scrum concept, the Sprint Retrospective, helped us make this a habit. At the end of each sprint we asked four simple questions and looked for the answers:

  1. What went well?
  2. What did not go so well?
  3. What have I learned?
  4. What still puzzles me?

The results of this continuous improvement turned into GrowthBoard, a growth management tool designed specifically for startups with traction, Growth Teams, and marketing departments. Check it out and don't repeat our mistakes. Happy growth!

Yuri Drogan, CEO of GrowthBoard

