Inside Product: Introduction to Feature Priority using ICE (Impact, Confidence, Ease) and GIST (Goals, Ideas, Step-Projects and Tasks)

Nimay Parekh
8 min read · May 3, 2018


When considering product rollouts, there are dozens of frameworks one can use to isolate priorities within qualitative and quantitative data. Frameworks vary by stage of business, product type, customer type, willingness to pay and accessibility to data. Coined and popularized by Sean Ellis, the ICE framework may be the most popular and useful of them when optimizing shipping time, developer ROI, user retention and funnel conversion.

What is ICE?

ICE is a combined evaluation of features based on three levers: Impact, Confidence and Ease. Impact estimates how much the effort will move your target metric, confidence estimates how well-supported your hypothesis is, and ease estimates how little time and effort will be required to ship the feature/product. Each lever is graded on a scale of 1–10, and feature worthiness is evaluated by averaging the three scores.

ICE Score = (Impact + Confidence + Ease) / 3

Let’s understand this with an example:

As a startup, you operate with limited resources. Beyond the limits of frugality, you are trying to maximize the time of each developer. As a product manager, your ability to clear blockers is your north-star defense that unlocks developer, designer, sales, marketing and customer success offense. When mapping your product backlog against your customer funnel and total request size, your ICE scores let you determine whether a given effort is warranted in the current sprint cycle. You can also recalibrate velocities based on ICE scores to adjust your current sprint backlog.

(Source: Growth Hackers)
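
To make the scoring concrete, here is a minimal Python sketch, with invented feature names and scores, that ranks a backlog by averaged ICE score:

    # Minimal ICE scoring sketch; feature names and scores are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        impact: int      # 1-10: expected movement across the funnel
        confidence: int  # 1-10: strength of supporting evidence
        ease: int        # 1-10: higher means less developer time

        @property
        def ice(self) -> float:
            # ICE Score = (Impact + Confidence + Ease) / 3
            return (self.impact + self.confidence + self.ease) / 3

    backlog = [
        Feature("One-click signup", impact=8, confidence=6, ease=7),
        Feature("Dark mode", impact=3, confidence=9, ease=8),
        Feature("Usage-based billing", impact=9, confidence=4, ease=2),
    ]

    # Highest ICE score ships first.
    for f in sorted(backlog, key=lambda f: f.ice, reverse=True):
        print(f"{f.name}: ICE = {f.ice:.1f}")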

In fact, the ICE scoring matrix can shape your entire product roadmap, giving your stakeholders provisional confidence in the timeline of major releases (i.e. X.0s). However, when attaching ICE scores, product managers can be flippant. One way to protect the integrity of the ICE score is to collectively survey those affected by the feature request, including developers and customers. This, however, requires educating all stakeholders on what each estimation is trying to unlock, so that its measurement is thorough. Let's take a deeper dive into what good ICE estimation entails.

Impact

The best way to measure impact is to understand how a feature moves a user/customer across the AARRR funnel. AARRR (Acquisition, Activation, Retention, Referral, Revenue) describes the five stages of converting a customer and maximizing ROI. Each stage has a series of drivers and metrics that allow product managers to quantitatively define and measure the impact of their efforts.

(Source: Apptentive)
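
As a rough illustration, the funnel can be modeled as ordered stage counts; stage-to-stage conversion rates then show where a feature has the most room for impact. The numbers below are invented:

    # Hypothetical AARRR funnel counts; conversion rates show where users drop off.
    funnel = [
        ("Acquisition", 10_000),
        ("Activation", 4_000),
        ("Retention", 1_500),
        ("Referral", 500),
        ("Revenue", 300),
    ]

    for (stage, users), (next_stage, next_users) in zip(funnel, funnel[1:]):
        rate = next_users / users
        print(f"{stage} -> {next_stage}: {rate:.0%}")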

Shipping efforts typically optimize for movement across the funnel. However, impact also unlocks value by improving customer stickiness within stages. Value, where possible, should be communicated in dollar terms to ensure that the projected revenue exceeds the customer acquisition cost. Extrapolating estimated value into revenue can also surface the customer lifetime value (LTV).

(Source: Matty Ford)

Via reverse engineering, product managers would assign the highest impact scores on a scale of 1–10 to the features with the highest projected revenues and/or highest LTV:CAC ratios. A minimum LTV:CAC ratio of 3:1 is the common benchmark.
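
A small worked example of that reverse engineering, with hypothetical unit economics, might look like this:

    # Illustrative unit economics; all figures are invented.
    arpu_per_month = 40.0        # average revenue per user per month
    avg_lifetime_months = 18     # expected customer lifetime
    cac = 240.0                  # customer acquisition cost

    ltv = arpu_per_month * avg_lifetime_months   # 720.0
    ratio = ltv / cac                            # 3.0

    print(f"LTV = {ltv:.0f}, LTV:CAC = {ratio:.1f}:1")
    # A ratio of at least 3:1 is the common benchmark; features projected to
    # push the ratio above it would earn the highest impact scores.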

Confidence

Confidence, also measured on a scale of 1–10, captures how much evidence warrants the feature build-out. Heavily weighted toward experiments and customer evidence, confidence counters the anecdotal and subjective biases in feature prioritization.

(Source: Itamar Gilad)

Other forms of user evidence include customer product-roadmap reviews and end-customer (B2B2C) feature/service request tickets. In rare instances, such as split decisions, impasses or tight release deadlines, a product manager's conviction or a competitor's release can outweigh the time it would take to aggregate data from longitudinal user studies, alpha results and user interviews. An excellent retrospective method for measuring the success of those convictions is to track the effect of feature releases on end-customer NPS scores.
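
One way to make confidence scoring less subjective is an explicit evidence-to-score mapping, loosely in the spirit of Itamar Gilad's confidence meter. The categories and weights below are assumptions, not a standard:

    # Illustrative mapping from evidence type to a 1-10 confidence score;
    # the categories and weights are invented for this sketch.
    EVIDENCE_SCORES = {
        "self conviction": 1,
        "competitor shipped it": 2,
        "customer interviews": 4,
        "support/feature request tickets": 5,
        "survey data": 6,
        "alpha/beta results": 8,
        "longitudinal study or A/B test": 10,
    }

    def confidence(evidence: list[str]) -> int:
        # Score on the strongest piece of evidence gathered so far.
        return max(EVIDENCE_SCORES.get(e, 1) for e in evidence)

    print(confidence(["self conviction", "customer interviews"]))  # 4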

Ease

Ease is usually measured in time: how much work will be required to ship the feature. Different companies use different methodologies to measure the ROI of their DevOps. For example, Manoj Chaudhary, CTO and VP of engineering at Loggly, uses the following metrics to measure the success of his DevOps team:

  1. Release frequency: How quickly code is released to production.
  2. Infrastructure recovery: How quickly production recovers from “fires”.
  3. Infrastructure resiliency: Reduction in downtime/problems with infrastructure.
  4. Infrastructure efficiency: How well resources are shielded from production deployment problems.
  5. Automation: Reduction in human intervention to fix problems.

A more standardized methodology for product managers, as opposed to CTOs/VPs of Engineering, is to measure developer ROI in terms of revenue and cost.

Developer ROI = (ARPU * Number of customers activated) / (Cost per developer per time block * Productivity per time block * Number of time blocks)
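
Plugging invented numbers into that formula makes the arithmetic concrete:

    # All inputs are invented, purely to show the arithmetic of the formula above.
    arpu = 40.0                         # average revenue per user
    customers_activated = 500
    cost_per_dev_per_sprint = 8_000.0   # one "time block" = one sprint
    productivity_per_sprint = 0.8       # fraction of the sprint spent on the feature
    num_sprints = 2

    developer_roi = (arpu * customers_activated) / (
        cost_per_dev_per_sprint * productivity_per_sprint * num_sprints
    )
    print(f"Developer ROI = {developer_roi:.2f}")  # ~1.56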

Integrating Manoj's framework, the five levers above can be combined into a methodology for measuring productivity, along with developer focus on feature requests and the number of blockages mitigated by the responsible product manager.

For the sake of simplicity, bringing developer ROI back to time, ease can be measured in weeks. Specifically, feature requests earn higher ease scores if they can be completed within a single sprint cycle. An important note: never conflate developer time with story points. Story points reflect relative complexity and may be independent of the calendar time needed to ship the feature.

(Source: Itamar Gilad)
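
A simple, assumed mapping from estimated developer-weeks to a 1–10 ease score might look like the sketch below; the thresholds are arbitrary:

    # Arbitrary mapping from estimated developer-weeks to an ease score;
    # anything that fits inside a single two-week sprint scores highest.
    def ease_score(estimated_weeks: float, sprint_length_weeks: float = 2) -> int:
        if estimated_weeks <= sprint_length_weeks:
            return 10
        # Lose a point for each extra week beyond one sprint, floored at 1.
        return max(1, 10 - round(estimated_weeks - sprint_length_weeks))

    print(ease_score(1))   # 10
    print(ease_score(6))   # 6
    print(ease_score(15))  # 1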

Product managers can measure their ability to forecast ease scores by reconciling their feature convictions against sprint burndown charts.

(Source: Scrum Institute)

Reconciling inaccuracies may entail (1) understanding the blockages that caused drops in speed, (2) optimizing the overall team productivity structure (including hiring) or (3) revisiting the developer-time-to-ease mapping to rethink the length of sprint cycles and the forecasting methodology.
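
One hedged way to quantify that reflection is to compare actual remaining story points against the ideal burndown line. All figures below are invented:

    # Hypothetical 10-day sprint: compare actual burndown against the ideal line.
    total_points = 50
    sprint_days = 10
    actual_remaining = [50, 47, 45, 44, 40, 38, 35, 30, 24, 18]  # invented

    for day, actual in enumerate(actual_remaining, start=1):
        ideal = total_points * (1 - day / sprint_days)
        drift = actual - ideal
        print(f"Day {day}: ideal {ideal:4.1f}, actual {actual}, drift {drift:+.1f}")
    # Persistent positive drift signals blockages or an over-optimistic ease score.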

Via ICE, we now have a prioritized icebox, product backlog and sprint backlog. What does this mean for a product roadmap that needs the shipping and forecasting integrity to see features through the next 3, 6 and 12 months? Introducing GIST (Goals, Ideas, Step Projects and Tasks). The Gantt charts and product roadmaps prescribed by many product planning tools are considerably outdated for a few reasons: (1) they take a waterfall rather than agile approach, (2) they take a project- rather than product-management approach and (3) they take a top-down rather than 360-degree approach.

What is GIST?

(Source: Itamar Gilad)

Goals are an aggregation of a product manager's OKRs (Objectives and Key Results). They follow a top-down, systematic cadence that forecasts the success of a particular feature, product, product manager and product leader by year and quarter. Goals then cascade into objectives and key results (akin, if not synonymous, to ideas) that roadmap the path to the expected success.

(Source: Atiim)

Ideas are the activities or objectives expected to lead to the desired results. At a macro level they can be broken down by function and individual into an OKR tree.

(Source: Weekdone)

At the product level, each goal can be an aggregation of key results, or a single key result, with ideas/objectives/milestones that are (1) quantifiable, (2) actionable, (3) flexible and (4) derivative, i.e. they can be further broken down into tasks or rolled up into goals.

(Source: Betterworks)
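
A minimal sketch, with invented key results, of rolling quantifiable key results up into a goal's overall progress:

    # Invented key results; goal progress is the mean of key-result completion.
    key_results = {
        "Raise activation rate from 40% to 55%": (0.40, 0.55, 0.49),   # (start, target, current)
        "Cut median time-to-value from 30 to 10 min": (30.0, 10.0, 18.0),
    }

    def kr_progress(start: float, target: float, current: float) -> float:
        # Fraction of the distance from start to target, clamped to [0, 1].
        return max(0.0, min(1.0, (current - start) / (target - start)))

    progress = [kr_progress(*v) for v in key_results.values()]
    print(f"Goal progress: {sum(progress) / len(progress):.0%}")  # 60%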

Best-in-class GIST software with OKR capabilities seamlessly updates the performance of objectives and key results, revealing the performance of each product professional in real time. This allows for prompter performance-optimization measures, including product performance leaderboards.

(Source: Weekdone)

Step Projects break objectives into projects of up to ten weeks, or roughly five two-week sprint cycles. Step projects are useful for shipping against quarterly targets, including the visualization of parallel and sequential step projects that allow for both small and large product step changes.

(Source: Itamar Gilad)

For example, in the event of a large-scale project, each interval in the image above represents a step project that leads to the completion of the overall project. Each product professional would break up their individual products in this step-project manner. Each group product leader would then view the series of projects and step projects in parallel to get a full view of blockages, constraints and resource performance when resources are shared across teams. For example, DevOps engineers or UX designers may be overleveraged in certain step projects, which can affect the synchronized delivery of all concerned features and products.

Tasks are bite-sized versions of step projects. They are stories with story points and velocities, introduced across sprint cycles and adjusted daily by the responsible product manager. Tasks are where ICE scores come in: each task has an ICE score that determines its rank in shipping priority, and step projects and ideas adjust based on the aggregate of their tasks' ICE scores. Goals are typically top-down and may not be impacted by ICE scores, although the aggregate can serve as a status update.
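
To tie the two frameworks together, here is a hedged sketch of the GIST hierarchy in code, where tasks carry ICE scores that aggregate up to their step project. Names and scores are invented:

    # Hedged sketch of the GIST hierarchy; names and scores are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        impact: int
        confidence: int
        ease: int

        @property
        def ice(self) -> float:
            return (self.impact + self.confidence + self.ease) / 3

    @dataclass
    class StepProject:
        name: str
        tasks: list = field(default_factory=list)

        @property
        def ice(self) -> float:
            # Step projects adjust on the aggregate of their tasks' ICE scores.
            return sum(t.ice for t in self.tasks) / len(self.tasks)

    step = StepProject("Streamline onboarding", tasks=[
        Task("Email-less signup", impact=8, confidence=6, ease=7),
        Task("Progress checklist", impact=4, confidence=7, ease=9),
    ])

    for t in sorted(step.tasks, key=lambda t: t.ice, reverse=True):
        print(f"{t.name}: {t.ice:.1f}")
    print(f"{step.name} aggregate: {step.ice:.1f}")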

GIST and ICE can truly serve as leading performance indicators, giving better precision and insight into the activities of product teams. With an appropriately institutionalized modus operandi for planning internal communication, external communication and product strategy, GIST and ICE can replace traditional product roadmapping and charting within customer-success- and product-centric organizations.
