Designing Experiments for Growth

Illustration: Carlos Rocafort IV

At Evernote, the Growth Team is a critical function that works to increase user engagement and, ultimately, monetization. To do so, the team leverages quantitative data and qualitative research to identify areas of opportunity and connect them to specific initiatives. Then, we design, implement, run, and iterate on experiments to explore and validate assumptions about how well they move us towards our business goals.

My tenure at Evernote began on the Growth Team. Though I have since transitioned to another team, I was fortunate to work with such an intelligent group of people on a vast range of experiments, and to pick up invaluable insights about the growth design process and practice. Here are some of them.


Start with a Hypothesis

With any experiment, it’s important to establish a clear hypothesis. Doing so brings clarity to what you’re solving for and ensures that team members understand the objective at hand. To that end, it’s paramount that hypotheses are supported by some level of directional evidence. If you invest in processes that enable your team to justify and prioritize experiments, your team can stay focused and move faster in the long run.

Know Your Metrics

Before you dive into any design details, ensure that your team collectively understands which metrics will be used to measure the success of a hypothesis. For designers, it helps to know which metrics map to which outcomes while you explore. Additionally, teams should challenge themselves to ensure that their metrics aren’t superficial; what you choose to track can bias what you learn.
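One way to keep a metric honest is to pair it with a guardrail. The sketch below is purely illustrative (the events, user IDs, and metric names are hypothetical, not Evernote’s actual instrumentation); it shows a success metric reported alongside a guardrail metric so a “win” on one number can’t hide harm on another.

```python
# Hypothetical sketch — the event names and data are made up for illustration.
# The point: report a success metric together with a guardrail metric.
from collections import defaultdict

events = [  # (user_id, event_name) — sample data
    ("u1", "signup"), ("u1", "note_created"),
    ("u2", "signup"),
    ("u3", "signup"), ("u3", "note_created"), ("u3", "uninstall"),
]

users = defaultdict(set)
for user_id, event in events:
    users[user_id].add(event)

signups = [u for u, evts in users.items() if "signup" in evts]
# Success metric: share of new users who created at least one note.
activation_rate = sum("note_created" in users[u] for u in signups) / len(signups)
# Guardrail metric: share of new users who uninstalled.
uninstall_rate = sum("uninstall" in users[u] for u in signups) / len(signups)

print(f"activation: {activation_rate:.0%}, uninstalls: {uninstall_rate:.0%}")
```

A variant that boosts activation while also raising the guardrail metric is a very different result from one that moves only the success metric.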

Aside from the quantitative setup, it can be quite helpful to look into user feedback and qualitative research. Oftentimes, your best directional evidence comes directly from those who use your product.

Go Broad to Go Narrow

One of the practices I actively incorporate into my process today was derived from my experiences on the Growth Team. Specifically, I believe going broad in exploration allows you to identify a wide range of potentially impactful designs and helps you understand what your experiment might look like (e.g., a traditional A/B test or a multivariate test). Additionally, you gather a more holistic view of how your experiment might actually fit into a user’s journey, as you can compare experiences. If your team has a clear sense of the goals and how much engineering effort they want to put into the experiment, narrowing down the options is a simple process of prioritization and elimination.
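Whatever shape the experiment takes, users need to be split into variants. A common engineering approach (sketched below under my own assumptions — this is not Evernote’s actual code) is to hash the user ID together with the experiment name, so assignment is deterministic, roughly uniform, and independent across experiments.

```python
# Illustrative sketch only — function and experiment names are hypothetical.
# Hashing user ID + experiment name gives stable, repeatable bucketing.
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically map a user to one of the experiment's variants."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same variant for a given experiment:
v1 = assign_variant("user-42", "welcome_note_title", ["control", "A", "B"])
v2 = assign_variant("user-42", "welcome_note_title", ["control", "A", "B"])
assert v1 == v2
```

Because the experiment name is part of the hash input, a user’s bucket in one experiment doesn’t correlate with their bucket in another.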

The practice of going broad has also helped me detach myself from my designs, because so much of my work goes unused. That said, there will be occasions when you explore an idea that may be worth implementing once you’ve validated the initial hypothesis further. In those instances, it’s important to document your expectations around the experiment and what the next steps might be once you’ve received results.

Invest in Copy

During my time on the Growth Team, I was able to learn more about the skill of crafting copy and to observe the direct impact of in-app communication.

For example, we ran an experiment in which new users would enter the product with a welcome note. With the right content, we believed this would help more users understand what they could put in a note, thereby leading to more note creates. As an added layer to this experiment, we believed that the note title could affect how many users decided to read the note. As such, we tested three different titles and saw a significant increase in readership among users who received a welcome note titled ‘The Power of the Note’.
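The post doesn’t share the underlying numbers, so the figures below are invented, but a minimal sketch of how you might check that a title’s lift in readership is statistically significant is a two-proportion z-test (pure standard library, no stats package):

```python
# Hypothetical numbers — the actual experiment data is not public.
# Two-proportion z-test for the difference between two read rates.
import math

def two_proportion_z_test(reads_a, n_a, reads_b, n_b):
    """Return (z statistic, two-sided p-value) for the difference in rates."""
    p_a, p_b = reads_a / n_a, reads_b / n_b
    pooled = (reads_a + reads_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# e.g. one title read by 160 of 1,000 users vs. 120 of 1,000 for control:
z, p = two_proportion_z_test(160, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 → unlikely to be chance
```

With three titles, you would typically run an omnibus test (e.g., chi-squared) first, or correct for multiple comparisons when testing variants pairwise.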

Don’t Neglect the User

As the goal of experimentation is to learn, designers should aim to create simple, minimal experiences that help the team accrue the necessary learnings. At the same time, designers are responsible for balancing business goals with user needs. If experiments feel isolated from the rest of the product experience or employ dark patterns, we’ve failed to design for the user.

One helpful tool for combating poor user experiences in growth design is to think about minimum viable experiences (as opposed to minimum viable products). Setting a threshold for how a user should feel during an experiment can help filter out subpar experiences. In addition, developing principles around your experimentation framework may help your team uphold its promise to users.

Analyze Data as a Team

Formalizing when teams walk through experiments and their results provides an opportunity to celebrate learnings and to discuss why the data came out as it did. It also holds teams accountable for their releases and helps surface the status of in-flight experiments.

Iterate on Experiments

Once an experiment returns data, it can be easy for teams to move on to the next initiative, especially if the experiment was successful. However, it’s important to dig into why an experiment failed, and to recognize that even a successful experiment usually leaves many opportunities for improvement.


Working on a Growth Team is truly an invaluable experience. There are so many interesting lessons to learn about process and practice that can be applied both to growth initiatives and to one’s core design craft. With that, cheers to growing!