Just add ICE, or how we at Pipedrive Growth team set priorities

Aivar Ots
Apr 11, 2017 · 5 min read

Almost a year ago, I started working as a Growth Product Manager at Pipedrive. At Pipedrive, the Growth Engineering team is part of the marketing team. Our goal is to build acquisition channels that generate signups over a longer period, compared to channels like paid media that stop delivering results as soon as a campaign ends.

The amazing team there had already started the first growth experiment — building a free web forms feature. Web forms is a simple tool to capture leads from your website. We did not want to build it as a full product feature — there are already so many good form builders out there that could be integrated with Pipedrive that we’d simply be adding to an already well-served sector. As a growth project though, it made total sense. We could build a simple tool to satisfy the needs of most small businesses and give it out free in exchange for presenting our logo on the submit page.


This project took the team five months to complete and turned out to be a failure. Well, not a complete failure, since it's a highly popular feature, but it did not do what we hoped it would do: drive new signups.


We needed to improve in many areas, and it seemed that the first thing to tackle was the way we prioritized our experiments list. How do you get the team to work on experiments that give the highest return for the least amount of effort? It used to be that we'd gather all stakeholders and cast a popular vote. But this does not work. The experiments list contains items with different impact sizes: some target existing customers, some new customers. All stakeholders have different experience and levels of confidence. And as the web forms project showed, some experiments can take up to five months to finish, so anything that resource-heavy had better work.

After doing some research, we decided to move forward with our own version of the ICE framework for prioritizing the backlog. It is fast, simple, easy to understand, and has a catchy name. It offers the right amount of evaluation for a fast-changing world, where the inputs for an experiment change faster than in-depth research could be completed. ICE consists of three factors to consider when evaluating experiments: impact, confidence, and effort. There are alternatives, such as BRASS or PIES, and I wouldn't say they are better or worse.

It is about picking one that works initially and improving it on the go.

Using the ICE framework for prioritization starts by listing your ideas and adding a high-level description of the project/experiment/test. Add a hypothesis or metrics next to the idea and describe how you would define success. Now you can begin applying ICE.
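As a rough sketch of that starting point (the field names here are my own illustration, not a prescribed schema), a backlog entry before any scoring might look like:

```python
from dataclasses import dataclass

# Hypothetical shape of one backlog entry before ICE scoring is applied.
@dataclass
class ExperimentIdea:
    name: str            # short label for the experiment
    description: str     # high-level description of the project/experiment/test
    hypothesis: str      # what you believe will happen, and why
    success_metric: str  # how you would define success

idea = ExperimentIdea(
    name="Free web forms",
    description="A free lead-capture form tool branded with our logo",
    hypothesis="Forms embedded on customer sites will drive new signups",
    success_metric="Signups attributed to form submit pages",
)
print(idea.name)
```

Once every idea carries a description, a hypothesis, and a success metric, you have everything needed to start applying ICE.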

For every project/experiment/test think of:

  • Impact. Evaluate the metric you are targeting. Will it have a major effect on your overall goal? In our case, we're looking for signups. We think of the audience we aim to impact. What share of it could we grab? How fast would it grow after we implemented the experiment?

  • Confidence. How sure are you that the experiment will deliver the impact you predicted? Is the estimate backed by data, or is it a gut feeling?

  • Effort. How much time and how many resources will the team need to build and run the experiment?

Every factor is rated on a 5-point scale, 1 to 5. For impact and confidence, use 1 for low and 5 for high; for effort, the scale is reversed: 1 for high effort and 5 for low effort. We chose the 1-to-5 scale because:

  • Having fewer than five different values made ICE scores too similar; you end up with multiple projects sharing the same score.

Multiplying all three factors gives an ICE score between 1 and 125, with a higher score indicating a high-impact, low-effort project.
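The scoring itself is just a product of the three ratings. Here is a minimal sketch of scoring and ranking a backlog (the project names and ratings are made up for illustration; note that effort is entered already inverted, so 5 means low effort):

```python
def ice_score(impact: int, confidence: int, effort: int) -> int:
    """Multiply the three 1-5 ratings; higher means a better bet.

    Effort is rated on the reversed scale (5 = low effort), so a plain
    product yields a score between 1 and 125.
    """
    for value in (impact, confidence, effort):
        if not 1 <= value <= 5:
            raise ValueError("each ICE factor must be rated 1 to 5")
    return impact * confidence * effort

# Hypothetical backlog entries: (name, impact, confidence, effort).
backlog = [
    ("Free web forms", 4, 3, 2),
    ("Referral program", 3, 4, 4),
    ("Signup flow tweak", 2, 5, 5),
]

# Sort the backlog so the highest-scoring project comes first.
ranked = sorted(backlog, key=lambda p: ice_score(*p[1:]), reverse=True)
for name, i, c, e in ranked:
    print(f"{name}: {ice_score(i, c, e)}")
```

A low-impact but cheap, well-understood tweak can outrank an ambitious five-month build here, which is exactly the behavior we wanted after the web forms experience.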

Avoid the trap of going into too much detail.

It will be difficult at the start, but eventually adding ICE values should take no more than 5 minutes per project. Yes, you will make mistakes, and there will be vagueness in the scores. This is OK. Getting stuff done is the key to learning, improving your prioritization inputs and eventually finding your growth areas. I constantly update the ICE table based on project execution times and results, and I re-score impact and confidence whenever new ideas come in.

Having used ICE for a year now, it's clear that you will still work on projects that fail; it's not 100% correct, and it will not show you where the basket of golden nuggets is. But it will give you more confidence in choosing where to apply your time and effort. It will also help you get buy-in from the team and stakeholders, as you can show a clear, calculated view of your priorities. This is far better than producing pages of project research and tons of PowerPoint presentations and still ending up arguing over everyone's personal opinion on which factor the research left out.


By now we have implemented ICE throughout our marketing organization, and we use it to prioritize growth experiments, marketing activities and web optimization tests. Every team has added a few tweaks of their own to get the best out of it, but the basis is the same. With the basics in place, it's easy to communicate priorities within the team or to outside stakeholders.

If you’re interested in using ICE, here’s a link to a Google Sheets template and sample ICE table.

[Image: ICE table]
