Just add ICE, or how we at the Pipedrive Growth team set priorities

Almost a year ago, I started working as a Growth Product Manager at Pipedrive, where the Growth Engineering team is part of the marketing team. Our goal is to build acquisition channels that generate signups over a longer period, in contrast to channels like paid media that stop delivering results as soon as a campaign ends.

The amazing team there had already started the first growth experiment: building a free web forms feature. Web forms is a simple tool to capture leads from your website. We did not want to build it as a full product feature, since there are already so many good form builders out there that could be integrated with Pipedrive that we’d simply be adding to an already well-served sector. As a growth project, though, it made total sense. We could build a simple tool to satisfy the needs of most small businesses and give it away free in exchange for displaying our logo on the submit page.

This project took the team five months to complete and turned out to be a failure. Well, not a complete failure, as it’s a highly popular feature, but it did not do what we hoped it would: drive new signups.


We needed to improve in many areas, and it seemed that the first thing to tackle was the way we prioritized our experiments list. How do you get the team to work on experiments that give the highest return for the least amount of effort? It used to be that we’d gather all stakeholders and cast a popular vote. But this does not work. The experiments list contains items with different impact sizes: some target existing customers, some new customers. All stakeholders have different experience and levels of confidence. And as the web forms project showed, some experiments can take up to five months to finish, so something that resource-heavy had better work.

After doing some research, we decided to move forward with our own version of the ICE framework for prioritizing the backlog. It is fast, simple, easily understandable, and has a catchy name. It offers the right amount of evaluation for a fast-changing world, where the inputs for every experiment change faster than in-depth research could be completed. ICE consists of three factors to consider when evaluating experiments: impact, confidence, and effort. Other frameworks exist, such as BRASS or PIES, and I wouldn’t say they are better or worse.

It is about picking one that works initially and improving it as you go.

Using the ICE framework for prioritization starts by listing your ideas and adding a high-level description of the project/experiment/test. Add a hypothesis or metrics next to the idea and describe how you would define success. Now you can begin applying ICE.
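For illustration, an entry at this stage might look something like the sketch below. The field names and values are hypothetical, not a prescribed template:

```python
# A hypothetical backlog entry before scoring: idea, description,
# hypothesis, and a definition of success. All fields are illustrative.
experiment = {
    "name": "Free web forms",
    "description": "Give away a simple lead-capture form builder, "
                   "with our logo shown on the submit page.",
    "hypothesis": "Visitors who see our logo on submitted forms "
                  "will click through and sign up.",
    "success_metric": "New signups attributed to web form branding",
}
```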

For every project/experiment/test think of:

  • Impact. Evaluate the metric you are targeting. Will moving it have a major effect on your overall goal? In our case, we’re looking for signups, so we think about the audience we aim to impact. What share of it could we capture, and how fast would that grow once the experiment is live?
  • Confidence. Indicate your internal belief that the project will be successful. Talk to people who have expertise and search for similar examples elsewhere. Do you have any data that makes you believe it will work? Has a user asked for it, or did it merely show up on some expert’s blog last week?
  • Effort. Evaluate the resources you need to get the project done. This includes time to prepare inputs, the cost of needed assets, and development time. Try to think of possible future expenses. Get the development team to participate by asking them to line up projects from easiest to most difficult. Don’t forget to add your own time.

Each factor is rated on a 5-point scale, from 1 to 5. For impact and confidence, 1 means low and 5 means high; for effort, the scale is inverted: 1 means high effort and 5 means low effort. We chose the 1 to 5 scale because:

  • Having fewer than five different values made ICE scores very similar: you end up with multiple projects sharing the same score.
  • Having more than five points made evaluating each factor more time-consuming.
  • It’s difficult to put down a clear value if you have too many close options.
  • Setting a score between 1 and 5 is way easier than setting one between 1 and 10.

Multiplying all three factors gives an ICE score between 1 and 125, with a higher score indicating a high-impact, high-confidence, low-effort project.
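As a minimal sketch of the arithmetic (the experiments and scores below are made up, and this is not our actual spreadsheet), scoring and ranking a backlog looks like this:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # 1 = low impact, 5 = high impact
    confidence: int  # 1 = low confidence, 5 = high confidence
    effort: int      # inverted scale: 1 = high effort, 5 = low effort

    @property
    def ice_score(self) -> int:
        # Three factors, each 1 to 5, multiplied: the score falls between 1 and 125.
        return self.impact * self.confidence * self.effort

# Hypothetical backlog entries, for illustration only.
backlog = [
    Experiment("Free web forms", impact=4, confidence=3, effort=1),
    Experiment("Referral program", impact=3, confidence=4, effort=4),
    Experiment("Signup flow copy test", impact=2, confidence=3, effort=5),
]

# Sort with the highest ICE score first: high impact, high confidence, low effort.
for exp in sorted(backlog, key=lambda e: e.ice_score, reverse=True):
    print(f"{exp.ice_score:>3}  {exp.name}")
```

Note how the resource-heavy web forms project (effort 1 on the inverted scale) lands at the bottom of this made-up ranking, which is exactly the kind of signal we were missing before.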

Avoid the trap of going into too much detail.

It will be difficult at the start, but eventually adding ICE values should not take more than five minutes per project. Yes, you will make mistakes, and there will be vagueness in the scores. This is OK. Getting stuff done is the key to learning, improving your prioritization inputs, and eventually finding your growth areas. I’m constantly updating the ICE table based on project execution times and results, and also updating impact and confidence scores whenever new ideas are put in.

Having used ICE for a year now, it’s clear that you will still work on projects that fail: the framework is not 100% correct, and it will not show you where the basket with the golden nuggets is. But it will give you more confidence in choosing where to apply your time and effort. It will also help you get buy-in from the team and the stakeholders, as you can show a clear and calculated view of your priorities. This is far better than producing multiple pages of project research and tons of PowerPoint presentations, and still ending up arguing over everyone’s personal opinion on which factor was left out of the research.


By now we have implemented ICE throughout our marketing organization, and we use it to prioritize growth experiments, marketing activities, and web optimization tests. Every team has added a few tweaks of their own to get the most out of it, but the basis is the same. With the basics in place, it’s easy to communicate priorities within the team or to outside stakeholders.

If you’re interested in using ICE, here’s a link to a Google Sheets template and a sample ICE table.

ICE table