Product Metrics that Signal MAU Impact

Alexander Yoon
May 8, 2022

A good product feature is not necessarily an impactful feature.

That was something I found shocking early in my work as a Product Data Analyst.

Early-stage startups do not have the luxury of pouring all their resources into improving product experience and feasibility. With extremely limited resources for survival, early startups need to make an impact fast. The ‘move fast and break things’ kind of mentality.

Monthly Active Users (MAU) is a metric that many pre-IPO B2C startups choose to prove the impact their product carries. The details certainly differ across business domains, but projects that can make a meaningful Active User (AU) impact are almost always prioritized.

At LINER, AU metrics are taken care of by both the Performance Marketing and Product teams. Performance Marketers contribute to AU growth by acquiring new users for the product. Product Managers/Product Owners contribute to AU growth by releasing and optimizing features that target product retention.

Making an AU impact means placing significant influence on AU metrics like MAU and DAU.

While some features boost AU numbers right after release, a feature can also make zero business impact, regardless of the amazing ideas and the time that went into building it. Making zero business impact basically implies that sooner or later you won’t see that feature in the product again.

So exactly which features get to make an AU impact? There is no clear explanation out there. However, as I supported Product Owners as a data analyst and later acted as a Product Owner myself, I realized that a certain framework exists.

Today I want to share the four key metrics that product teams need to optimize in order to make an AU impact: the Success Metric, the Guardrail Metric, the Coverage Metric, and the Entry Metric.

Success & Guardrail metrics indicate a product feature’s Value Delivery, and the metrics I call Coverage & Entry metrics indicate a product feature’s Impact.

Value Delivery

Success Metric

The Success Metric indicates how many users end up achieving the goal after entering the feature’s UX flow. In other words, the Success Metric keeps track of how well the feature is fulfilling its purpose.

Depending on the project, success metrics could be cart checkout rates, cross-platform installation rates, subscription conversion rates, or even week-1 retention. Different projects with different success metrics are carried out according to a company’s priorities.

The Success Metric determines whether the new feature delivered its intended value to the users. Therefore, I consider success metrics not only as indicators of whether the project met its goals, but also as a deciding factor in whether the project needs further iterations.
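As a minimal sketch of what this looks like in practice, here is how a success metric such as a subscription conversion rate might be computed from a raw event log. The table and event names (`events`, `enter_flow`, `subscribe`) are hypothetical, not LINER’s actual schema.

```python
import pandas as pd

# Hypothetical event log: one row per user action.
events = pd.DataFrame({
    "user_id":    [1, 1, 2, 3, 3, 4],
    "event_name": ["enter_flow", "subscribe", "enter_flow",
                   "enter_flow", "subscribe", "enter_flow"],
})

# Users who entered the feature's UX flow.
entered = set(events.loc[events.event_name == "enter_flow", "user_id"])
# Users who achieved the goal (here: converting to a subscription).
converted = set(events.loc[events.event_name == "subscribe", "user_id"])

# Success Metric: share of entrants who achieved the goal.
success_rate = len(entered & converted) / len(entered)
print(f"Success metric (conversion rate): {success_rate:.0%}")  # 50%
```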

The more skilled a product team is at setting success metrics, the better the overall organization can judge and improve a product feature’s impact and value. I believe this is the starting point of a good product culture.

Guardrail Metric

You’ve probably heard of Guardrail metrics in the field of experimentation.

Guardrail metrics are KPIs that are believed to pose a risk or danger to the product if they go off the rails. (No wonder they’re called “guardrail” metrics!)

There are even times when a new project has to be let go because it proves incapable of keeping the guardrail metrics under control, generating a high cost for the company. A guardrail metric spiraling out of its safe range is a risk product teams have to look out for in the process of proving product hypotheses.

Long story short, optimizing guardrail metrics is about minimizing risk.

While there are users who successfully follow our UX flow,
what are the trade-offs?

A lot of SaaS products monitor uninstall rates, refund rates, and churn rates as their guardrail metrics. In the case of LINER’s browser extension product, we set the extension uninstall rate as a key guardrail metric and monitor it carefully.
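As a rough sketch of what that monitoring looks like, here is a simple guardrail check a team might run while rolling a feature out. The threshold and numbers are illustrative assumptions, not LINER’s actual values.

```python
# Hypothetical guardrail check for a browser-extension rollout.
UNINSTALL_RATE_THRESHOLD = 0.05  # assumed acceptable ceiling, made up

def guardrail_ok(uninstalls: int, exposed_users: int) -> bool:
    """Return True while the uninstall rate stays inside the guardrail."""
    return uninstalls / exposed_users <= UNINSTALL_RATE_THRESHOLD

# Example: 120 uninstalls out of 2,000 users exposed to the new feature.
if not guardrail_ok(uninstalls=120, exposed_users=2_000):
    print("Guardrail breached: pause the rollout and investigate.")
```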

This is the main reason why product teams do not release a new feature to all cohorts right off the bat. They release the feature to a few groups of users at a time, monitor the KPIs, and only then release it as an official part of the product.

Impact

Coverage Metric

An AU increase means that more casual users are becoming active users of the product. To make that kind of quantitative impact with a feature, product teams need to raise coverage to expose the feature’s UX flow to more users, then deliver value to the acquired users within that flow and convert them into active users.

The metric I wish to emphasize the most in this article is what I call a Coverage Metric. Coverage metrics are, by my definition, the percentage of users a feature is reaching out of the entire user pool.

The Coverage Metric represents the impact radius of the feature: the portion of the product’s population that the feature can acquire. A key characteristic of coverage is that it amplifies both the positive and the negative effects of the product feature. By controlling or optimizing coverage metrics, product teams can cause significant swings in both success and guardrail metrics.
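In code, a coverage metric is simply the share of the whole user pool that the feature actually touches. A minimal sketch, with entirely made-up numbers:

```python
# Coverage Metric: users exposed to the feature / entire user pool.
# All numbers below are made up for illustration.
total_users = 100_000     # entire monthly user pool
exposed_users = 18_000    # users who encountered the feature's entry point

coverage = exposed_users / total_users
print(f"Coverage metric: {coverage:.0%}")  # 18%
```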

A good example of controlling coverage is the partial release of a new feature in an A/B test. With a guardrail metric that is yet to be optimized, high coverage leaves the company exposed to high risk. To avoid this, product teams control coverage by exposing the feature to only a sample of users and monitoring the KPIs before a full release to the entire user pool.
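One common way to implement that kind of controlled exposure is deterministic bucketing: hash each user ID and expose the feature only to IDs that fall under the rollout percentage. This is a generic pattern, not necessarily how LINER does it:

```python
import hashlib

def in_rollout(user_id: str, rollout_pct: float, salt: str = "new-feature") -> bool:
    """Deterministically assign a user to the rollout bucket.

    The same user always gets the same answer, so the exposed cohort
    stays stable as rollout_pct is gradually ramped up.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < rollout_pct

# Start at 10% coverage; raise the percentage as the guardrails hold.
print(in_rollout("user-42", rollout_pct=0.10))
```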

Another example of controlling coverage can be found in marketing. When you work with marketers who use engagement tools like Braze, you sometimes hear that the newest engagement campaign had a pretty good CTR but failed to reach enough users. That is a scenario where marketers need to optimize their Coverage Metric in order to have their campaigns reach more users.

Entry Metric

If the Coverage Metric is the probability that users find the entry point to the feature, the Entry Metric is the probability that users actually enter through that entry point upon discovering it.

Think of the entry point as the feature’s user acquisition point.

Consider a banner or a slide-up as the entry point to our newest feature. In this case, the probability of clicking the CTA button is our entry metric.
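Continuing the banner example, the entry metric is just the click-through on the entry point. A minimal sketch with hypothetical numbers:

```python
# Entry Metric: of the users who saw the entry point, how many entered?
# Numbers are illustrative.
banner_impressions = 18_000   # users who discovered the entry point
cta_clicks = 2_700            # users who actually entered the UX flow

entry_rate = cta_clicks / banner_impressions
print(f"Entry metric (CTA click-through): {entry_rate:.0%}")  # 15%
```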

This is where product teams try to capture users’ attention with good UX writing and UI design that fits the feature’s context.

When setting entry metrics, it is important not to confuse your entry metrics with your success metrics. Make a clear distinction between the two.

Example: Success, Guardrail, Coverage, Entry

All four of these metrics need to be optimized to the right standard for a newly released feature to create its intended, much-needed impact.

Say we are the owners of a clothing brand, and we have just opened an offline store. We got a spot in the downtown mall and wish to create short-term revenue impact.

What are our success, guardrail, coverage, and entry metrics in this case?

  • Success Metric: the percentage of people who enter our store and actually buy clothes (make a purchase).
  • Guardrail Metric: the daily operating and staffing cost incurred just by keeping the store open.
  • Coverage Metric: the probability that people in the mall find our store. This is why we would want our store somewhere everyone at the mall can easily find.
  • Entry Metric: the probability that people passing by actually enter our store. This is the point where we are given a chance to catch their attention, so to raise our entry metric we would probably decorate the storefront to maximize curiosity and interest.

If even one of the four metrics is not optimized, there is no point in keeping the clothing store open and operating, and the endeavor is deemed unsuccessful.
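Notice how the first three metrics chain together as a funnel: coverage determines who can discover the store, entry determines who walks in, and success determines who buys, while the guardrail caps what the whole funnel is allowed to cost. A toy calculation with made-up numbers:

```python
# Toy funnel for the clothing store example; every number is made up.
mall_visitors = 10_000   # everyone at the mall this week
coverage = 0.40          # share of visitors who find the store
entry = 0.15             # share of finders who walk in
success = 0.20           # share of entrants who buy something

buyers = mall_visitors * coverage * entry * success
print(f"Expected buyers: {buyers:.0f}")  # 120

# Guardrail: weekly operating cost must stay below the revenue
# those buyers generate, or the funnel loses money regardless.
avg_basket = 50  # average purchase value, also made up
print(f"Expected weekly revenue: {buyers * avg_basket:.0f}")  # 6000
```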

Outro

Value Delivery & Impact are not new concepts.

For the coverage and entry metrics, I simply put names on them, but great product managers and owners always seek out and create change in these metrics when utilizing data. Some of you reading this article may even be slightly disappointed that there wasn’t anything here you didn’t already know.

However, I believe it is important to put your own labels on concepts and build yourself a framework when engaging with projects, regardless of the field.

When you release a brand-new product feature, there is a lot of data with unorganized context, which creates confusion in the process of identifying the key problems. At LINER, we believe in and strategize with this Value Delivery & Impact structure to keep product sprints efficient while managing feature iterations.

I hope this article finds all the right folks out there trying to utilize data and strategize for product impact.

Thank you for reading!
