What is the completion rate of a product feature?

Paul Levchuk
5 min read · Jan 24, 2023


In 2010 Google published an exciting behavioral framework called HEART. HEART stands for Happiness, Engagement, Adoption, Retention, and Task Success. Today we will talk a little about the last, but not least, aspect of a product feature: Task Success.

Product feature success can certainly be measured in different ways. That’s why Google proposed decomposing Task Success into the following directions:

  • percent of tasks completed
  • time to complete a task
  • error rate

On the one hand, the most obvious and general way to assess Task Success is the percentage of tasks completed, also known as the completion rate.

On the other hand, for simple product features, some of the above-mentioned directions are not relevant.

For example, the error rate is meaningful when a product feature consists of several steps and some of them require non-trivial user actions. The time to complete a task is also a function of product feature complexity and is much less relevant for straightforward features.

So, let’s start our Task Success analysis of product features.

First of all, I would like to mention that, as a rule, there are not many product features that require more than one user action and can be shaped into a funnel with clearly defined start and finish steps.

That’s why the sample I present in a moment is not going to be long.

In the table below I collected 5 product features that have <ftr>_started and <ftr>_finished events.

For each product feature, I calculated the following metrics:

  • [# users started] — users who started interacting with the product feature
  • [# users finished] — users who completed a task with the product feature
  • [# users with errors] — users who experienced errors while using the product feature
  • [% completion rate] — % of users who successfully completed a task with the product feature
  • [avg time to complete (s)] — the average time taken to successfully complete a task with the product feature
  • [MCC started] — MCC coefficient that measures the impact on retention for users who just started interacting with the product feature (see the sketch after this list)
  • [MCC finished] — MCC coefficient that measures the impact on retention for users who successfully completed a task with the product feature
  • [MCC w errors] — MCC coefficient that measures the impact on retention for users who experienced errors while using the product feature
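In case the MCC coefficient is unfamiliar: here is a minimal sketch of how such a coefficient can be computed, assuming MCC refers to the Matthews correlation coefficient between two per-user binary flags. The flag names and the random toy data below are illustrative assumptions, not my actual dataset.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(7)

# Illustrative per-user binary flags (toy data, not the article's dataset):
# started  = 1 if the user triggered <ftr>_started
# retained = 1 if the user came back within the retention window
started = rng.integers(0, 2, size=1_000)
retained = rng.integers(0, 2, size=1_000)

# [MCC started]: Matthews correlation between starting the feature and retention.
# Values near 0 mean no relationship; positive values mean users who start
# the feature are retained more often than average.
print(f"MCC started: {matthews_corrcoef(started, retained):.4f}")
```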

I always calculate all behavior metrics at the user level. By doing this, I protect my analysis from cases where a small number of power users interact with some product feature extremely intensively and bias event-level counts.
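To make this user-level aggregation concrete, here is a minimal pandas sketch. The event log schema (user_id, event, ts) and the toy rows are my own assumptions for illustration:

```python
import pandas as pd

# Hypothetical event log (schema and rows are illustrative, not the real dataset)
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3, 3, 4],
    "event": ["featureA_started", "featureA_finished",
              "featureA_started", "featureA_finished",
              "featureA_started", "featureA_started", "featureA_finished",
              "featureA_started"],
    "ts": pd.to_datetime([
        "2023-01-01 10:00:00", "2023-01-01 10:00:08",
        "2023-01-01 11:00:00", "2023-01-01 11:00:05",
        "2023-01-01 12:00:00", "2023-01-01 12:00:30", "2023-01-01 12:00:37",
        "2023-01-01 13:00:00",
    ]),
})

# Collapse to the user level first (first start / first finish per user),
# so a few power users cannot bias the metrics with raw event counts.
started = events.loc[events["event"] == "featureA_started"].groupby("user_id")["ts"].min()
finished = events.loc[events["event"] == "featureA_finished"].groupby("user_id")["ts"].min()

n_started = started.size                               # [# users started]
n_finished = finished.index.isin(started.index).sum()  # [# users finished]
completion_rate = n_finished / n_started               # [% completion rate]

# [avg time to complete (s)]: first finish minus first start, per user
durations = (finished - started).dropna().dt.total_seconds()

print(f"# users started: {n_started}")
print(f"% completion rate: {completion_rate:.0%}")
print(f"avg time to complete (s): {durations.mean():.1f}")
```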

Table: HEART Task Success analysis.

Note that the product features in the table above are sorted by decreasing [% completion rate].

As you can see, some product features have been used by a lot of users (e.g. featureA: 10,355 users, or 84% of all users), others by just a tiny fraction of them (e.g. featureE: 422 users, or 3.4% of all users).

featureD

featureD is not popular but has the highest [% completion rate] = 99%.

As [% completion rate] is almost 100%, it’s quite expected that [avg time to complete (s)] is very short: just about 2 seconds. It’s a very small product feature, and there is no cognitive load from using it.

Since featureD is completed so fast, it does not generate a lot of instant value, and that’s why the marginal effect on retention from completing a task with it is very tiny (MCC increased from 0.1433 up to 0.1446).

featureA

featureA is very popular but has [% completion rate] = 75%.

As [% completion rate] is not close to 100%, we could expect [avg time to complete (s)] to be a little longer. It’s about 8 seconds. featureA is related to user input activity during onboarding. Some users get stuck completing it, which indicates some cognitive load here.

An interesting thing happens if we look at how completing a task with this product feature impacts user retention.

[MCC started] is negative, but that’s expected: featureA is used during user onboarding, when users are not yet engaged and [% retained users] is lower than average; that’s why the MCC coefficient is negative.

featureA does not generate instant value either, but for users who manage to complete a task with this product feature, the marginal effect on retention is quite high and positive (MCC increased from -0.1472 up to -0.0795).

featureB

featureB is quite popular but has [% completion rate] = 52%.

[avg time to complete (s)] is the highest for this product feature: about 310 seconds (~5 min). Such a long time span to complete a task with this feature clearly means there is a big cognitive load here.

The key question arises here: does this big cognitive load impact user retention negatively?

featureB is one of the core product features. By completing a task with it, users receive a lot of value from the product. That’s why the marginal effect on retention from completing a task with it is huge and positive (MCC increased from 0.0352 up to 0.1281).

featureE

featureE is not popular and has [% completion rate] = 42%.

[avg time to complete (s)] is about 27 seconds, which brings us to the conclusion that some cognitive load is happening here.

There are 2 interesting insights about it:

  • The marginal effect on retention from completing a task with this product feature is small and negative (MCC decreased from 0.0816 down to 0.0749)
  • A lot of users experienced errors while using this product feature. Despite this, by completing a task with it users still received some advanced experience in the product. That’s why the impact on retention from completing a task (even with errors) is still positive (MCC decreased from 0.0816 down to 0.0545), although the marginal effect is negative.

Having done analyses like this from time to time, I would like to generalize a little:

  1. As a rule, the more time users spend completing a task with a product feature, the lower the [% completion rate]. These metrics are negatively correlated, which is expected.
  2. As a rule, there is a very weak negative correlation between [% completion rate] and [MCC finished]. It means that even if some product features have low completion rates, they can nevertheless bring a lot of value to customers, and that’s why they have a high positive effect on user retention.
  3. As a rule, there is a weak positive correlation between [avg time to complete (s)] and [MCC finished]. It means that even if some product features take longer to complete a task, they can nevertheless bring a lot of value to customers, and that’s why they have a high positive effect on user retention.
  4. Always cross-check your analysis with the impact on user retention (a sketch of such a check follows this list).
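As an illustration of points 1–4, here is a minimal sketch of such a cross-check using the per-feature numbers quoted in this article. With only four features the correlations are illustrative, not statistically meaningful:

```python
import pandas as pd

# Per-feature summary using the numbers quoted above
# ([MCC finished] is the post-completion MCC for each feature)
summary = pd.DataFrame({
    "feature": ["featureD", "featureA", "featureB", "featureE"],
    "completion_rate": [0.99, 0.75, 0.52, 0.42],
    "avg_time_to_complete_s": [2, 8, 310, 27],
    "mcc_finished": [0.1446, -0.0795, 0.1281, 0.0749],
})

# Spearman rank correlation is a safer default for a handful of features
# with heavily skewed completion times.
metrics = ["completion_rate", "avg_time_to_complete_s", "mcc_finished"]
print(summary[metrics].corr(method="spearman"))
```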


Paul Levchuk

Leverage data to optimize customer lifecycle (acquisition, engagement, retention). Follow for insights!