How Product Leaders Can Evaluate Features Based on Business Impact

Travis Kaufman · Published in Agile Insider · Jul 24, 2019

Image by Free-Photos from Pixabay

The Controversy

This sounds like a no-brainer, right? No product leader in this day and age ignores how their product investments impact the business, and measuring this at the product level is quite trivial. For enterprise SaaS companies, as long as you have an SKU and your sales team is actively selling the product, you simply look at how much revenue the product generates, its customer retention rate, and its customer lifetime value.

The challenge is that not all product investments map one-to-one to an SKU, which leaves an unclear association with business metrics. In fact, most decisions product managers make occur in the trenches of the product, on a feature-by-feature basis.

Some product practitioners dismiss this topic altogether, insisting that product feature success should be measured on user engagement alone.

In my 20+ years of experience building SaaS products for the enterprise, it’s been clear to me that engagement alone is not enough. Earlier in my career, I released a new offering that was a free add-on to one of our existing products. We tracked adoption, and it was off the charts: one of the fastest-adopted features in our company’s history. I was ecstatic. The offering was serving an unmet demand, and customers couldn’t get enough.

I continued to share release success in terms of the number of customers using the feature, and my CEO at the time responded with “I don’t care… I want to know which customers achieved success.” I was devastated. It took my ego some time to accept that what my CEO said was absolutely correct. Adoption alone told us that the feature addressed a need; what was missing was how that feature contributed to the success of our customers.

How to Evaluate Feature-Level Success

Since that experience, I’ve participated in and led a number of product release review meetings, and they were all disconnected from the impact the feature had on the customer. Some of these meetings provoked really interesting questions, like “should we continue to invest in this feature, or kill it and move on?” The information we had available was never enough to know whether an idea was worth pursuing, so we would kick off lengthy one-off analyses. This type of analysis wasn’t something we could do as often as necessary, so many decisions were made mostly on intuition.

After speaking with a number of colleagues on the subject, it became clear that there was no consistent practice for evaluating feature-level product investments. When asked about the introduction of a new feature, each product leader agreed that the decision they made was correct, yet each based it on something different. Some based their decision on key accounts insisting the feature be developed; their criterion of success was simply delivering what those accounts asked for. Measuring success this way only measures the team’s ability to ship new features and is not a meaningful indicator for any sort of forward-looking decision on where to invest.

Others based success on influence over an engagement-based metric, or “north star.” This feels closer to business impact, but it doesn’t take into account other unknown factors that may also be influencing the metric.

Combining Qualitative & Quantitative Measurements

What I found worked best was a combination of quantitative and qualitative measurements to indicate feature success. The feature needed to be used by customers, and those customers had to have a revenue impact on our business. So for the quantitative measures, I like to look at usage in order to measure revenue influence. For example, if a feature were used by 200 customers and the combined revenue of those customers equaled $500,000, I would consider this feature to be influencing $500,000 worth of business.
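
To make that arithmetic concrete, here is a minimal sketch in Python. The customer names and revenue figures are hypothetical, and in practice the usage data would come from your analytics tooling:

```python
# Hypothetical data: which customers used the feature, and what each pays annually.
feature_users = {"acme", "globex", "initech"}          # customers seen using the feature
annual_revenue = {"acme": 120_000, "globex": 250_000,  # revenue per customer, in dollars
                  "initech": 130_000, "umbrella": 90_000}

# Revenue influence: total revenue of the customers who actually use the feature.
revenue_influence = sum(annual_revenue[c] for c in feature_users)
adoption_count = len(feature_users)

print(f"{adoption_count} customers, ${revenue_influence:,} in influenced revenue")
# -> 3 customers, $500,000 in influenced revenue
```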

This view gives me an understanding of both the adoption and the revenue impact of a given feature. These two measures are good, but not enough. I also need qualitative measurements: knowing whether customers found the feature valuable and whether they found it easy to use. With these combined data points, I can start to form a recommendation.

For example, if I were to remove this feature, I know immediately that it may impact 200 customers and up to $500,000 in revenue. However, the revenue is at risk only if the customers find value in the feature. If a large percentage of customers find the value to be low, it’s unlikely those 200 customers would leave if the feature were no longer offered, so the risk to revenue is low.

In the event a large percentage of users do find the feature valuable but also find it difficult to use, then I run the risk of them looking for an alternative solution, and the revenue is potentially at risk.

4 Metrics for Measuring Product Feature Success

The combination of these metrics gives me a good view of the success of a given feature; a sketch of how they might be computed together appears after the list below.

  1. Usage: Low usage indicates an operational issue in my product release process. Perhaps we’re releasing too much too often, or simply failing to properly raise awareness. Usage is best calculated as a percentage of intended users. For example, consider a feature designed for your power users. Knowing power users make up a small fraction of your overall user base, you wouldn’t consider the release a failure just because it only reached 2% of your total users.
  2. Revenue Influence: This allows you to connect product usage with business impact. Revenue influence gives you a common denominator with which to compare future product investments. Likewise, if you are deprecating a feature, you’ll do so knowing the potential revenue at risk.
  3. Perceived Value: Asking your customers to rate the value of a given feature lets you distinguish the must-have features of your product from the nice-to-haves. Simply ask your customers, “On a scale from 1 (not valuable) to 5 (very valuable), how valuable is this feature?” If they answer with low scores, you either haven’t built the right feature or haven’t built it in a way that materially addresses their challenge.
  4. Perceived Effort: Asking your customers to rate the ease of use of a given feature will tell you if there is room to improve. Simply ask your customers, “On a scale from 1 (easy) to 5 (difficult), how easy was this feature to use?” If the scores are high, it indicates a poor user experience.
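
Here is a small Python sketch of how these four measurements might be rolled into a single scorecard. The record type, field names, and example data are all hypothetical illustrations, not a prescribed implementation:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class FeatureScorecard:
    feature: str
    usage_pct: float        # share of *intended* users who adopted the feature
    revenue_influence: int  # combined revenue of adopting customers, in dollars
    avg_value: float        # survey mean, 1 (not valuable) .. 5 (very valuable)
    avg_effort: float       # survey mean, 1 (easy) .. 5 (difficult); high is bad

def build_scorecard(feature, adopters, intended_users, revenue_by_customer,
                    value_scores, effort_scores):
    """Roll the four measurements for one feature into a single record."""
    return FeatureScorecard(
        feature=feature,
        usage_pct=100 * len(adopters) / len(intended_users),
        revenue_influence=sum(revenue_by_customer.get(c, 0) for c in adopters),
        avg_value=mean(value_scores),
        avg_effort=mean(effort_scores),
    )

# Hypothetical example: a power-user feature adopted by 2 of 5 intended customers.
card = build_scorecard(
    feature="bulk-export",
    adopters={"acme", "globex"},
    intended_users={"acme", "globex", "initech", "umbrella", "stark"},
    revenue_by_customer={"acme": 120_000, "globex": 250_000},
    value_scores=[5, 4, 5],
    effort_scores=[2, 1, 2],
)
print(card)  # usage_pct=40.0, revenue_influence=370000, avg_value≈4.7, avg_effort≈1.7
```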

Examples of How to Use These Metrics

Let’s run through a couple of scenarios and see how these metrics guide our decision.

1. Feature A:
   a. Usage is high
   b. Revenue is low
   c. Value is high
   d. Effort is high

Recommendation: Do not invest more at this time. Adoption is high, indicating you reached your intended audience. They find the feature valuable but, unfortunately, difficult to use. Because the revenue impact is low, making further investments here would have minimal impact.

2. Feature B:
   a. Usage is low
   b. Revenue is low
   c. Value is high
   d. Effort is low

Recommendation: Continue promoting the feature before further development. Since those who used the feature found high value in it, you have likely built the right feature. Revenue will be low in most cases where usage is low, so it doesn’t factor into the decision yet. When usage is low, it’s difficult to justify investing in further development.
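
Carrying the scorecard idea forward, the two recommendations above can be expressed as a deliberately crude decision heuristic. The rules below simply mirror the two scenarios; real cutoffs for “high” and “low” would depend on your product and customer base:

```python
def recommend(usage_high, revenue_high, value_high, effort_high):
    """Toy decision rules mirroring the two scenarios above."""
    if value_high and effort_high and not revenue_high:
        # Feature A: valuable but hard to use, with little revenue at stake.
        return "Do not invest more at this time; revenue impact would be minimal."
    if value_high and not usage_high:
        # Feature B: the right feature, just under-promoted.
        return "Promote the feature further before investing in development."
    return "Gather more data before deciding."

print(recommend(usage_high=True, revenue_high=False, value_high=True, effort_high=True))
print(recommend(usage_high=False, revenue_high=False, value_high=True, effort_high=False))
```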

Each of the elements above is anchored in your customers’ experience with your product and its potential revenue impact. An additional factor to consider is the hard cost it took to create and maintain the offering. There are a number of articles written about calculating this cost (see articles x…, y…, z…).

A simple method is to calculate development cost as the number of developers multiplied by the average cost per developer over the time spent building the feature set. For maintenance cost, a simple proxy is the number of customer-raised issues that must be escalated to your development team to resolve.
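
As a rough sketch of that arithmetic, with hypothetical headcount, cost, and ticket figures:

```python
# Development cost: headcount x fully-loaded cost, prorated over build time.
developers = 4
avg_annual_cost = 180_000   # hypothetical fully-loaded cost per developer, in dollars
months_spent = 3
development_cost = developers * (avg_annual_cost / 12) * months_spent  # $180,000

# Maintenance proxy: customer issues escalated to engineering for this feature.
escalated_tickets = 17

print(f"Build cost ~${development_cost:,.0f}; {escalated_tickets} escalations since launch")
```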

Companies can easily go overboard on cost calculations, attempting to track the time spent on every customer ticket along with the pay scale of everyone involved in its resolution. For companies just getting started with product investment measurement, keep in mind that a simple method will get you directionally correct faster than trying to count every penny.
