LTV Series

LTV and behavioral signals

User actions and Time To Value do not give clear answers

Paul Levchuk
6 min read · Jul 8, 2024

In the previous few posts, I discussed unit economics at the paid campaign level.

One of the readers asked me to enrich the research with behavioral metrics. As I have limited data (and free time), I quickly came up with a few simple behavioral metrics:

  • [activity_24h] — the average number of inquiries sent within the first 24 hours
  • [TTV] — the average number of hours it takes a user to send the 1st inquiry (that is, Time To Value)
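For concreteness, here is a minimal sketch of how these two metrics could be computed from a raw inquiry log. The data layout (a signup time per user plus a list of inquiry timestamps) and all values are assumptions for illustration, not the actual pipeline behind the post.

```python
from datetime import datetime, timedelta

# Hypothetical layout: signup time per user plus raw inquiry events.
# All names and values here are illustrative.
signup = {
    "u1": datetime(2024, 7, 1, 10, 0),
    "u2": datetime(2024, 7, 1, 12, 0),
}
inquiries = [
    ("u1", datetime(2024, 7, 1, 11, 30)),
    ("u1", datetime(2024, 7, 1, 20, 0)),
    ("u2", datetime(2024, 7, 2, 9, 0)),
]

def activity_24h(user_id):
    """Number of inquiries the user sent within 24h of signup."""
    start = signup[user_id]
    return sum(
        1 for uid, ts in inquiries
        if uid == user_id and start <= ts < start + timedelta(hours=24)
    )

def ttv_hours(user_id):
    """Hours from signup to the user's first inquiry (Time To Value)."""
    start = signup[user_id]
    own = [ts for uid, ts in inquiries if uid == user_id]
    if not own:
        return None  # the user never sent an inquiry
    return (min(own) - start).total_seconds() / 3600

print(activity_24h("u1"), ttv_hours("u1"))  # 2 1.5
```

Averaging these per-user values over a cohort gives the two metrics as defined above.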

Let me add a bit of context: the data comes from a two-sided marketplace: users who are looking for a service (demand side) and users who are providing it (supply side).

Today we will dig deeper into the relationship between behavioral signals and LTV.

[activity_24h] signal

Let's build a chart to learn about the relationship between [activity_24h] and LTV. As we expect LTV to be quite volatile, let’s also add the % of buyers to the chart.

The relationship between [activity_24h] and LTV.

From the chart above we can quickly grasp that:

  • the distribution of [# buyers] is not linear
  • the distribution of LTV is not linear either

Let’s dig deeper into this.

There is a segment of users who didn’t send any inquiries within the first 24 hours.

In this case, this segment is rather small (~2%), so we don’t need to worry about it too much. However, sometimes this segment of ‘slow’ users is much larger, and adjusting your onboarding for it can be very beneficial.

There are also 3 more user segments with different sizes and different levels of user activity:

  • sent 1 inquiry (8% of buyers),
  • sent many (13+) inquiries (63% of buyers), and
  • in between (27% of buyers).
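The segmentation above can be expressed as a simple bucketing function. The cut-offs (0, 1, 13+) follow the segments described in the post, but the exact boundaries of the ‘in between’ bucket (2–12) are my assumption:

```python
def activity_segment(n_inquiries):
    """Map a 24h inquiry count to the segments above; the exact upper
    bound of the 'in between' bucket (2-12) is an assumption."""
    if n_inquiries == 0:
        return "no inquiries"
    if n_inquiries == 1:
        return "1 inquiry"
    if n_inquiries >= 13:
        return "many (13+)"
    return "in between (2-12)"

print(activity_segment(5))   # in between (2-12)
print(activity_segment(20))  # many (13+)
```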

On the one hand, we see the biggest concentration of users with 13+ sent inquiries.

If users send too many inquiries, it could mean that our recommendations are not working properly. It makes sense to interview users who sent many inquiries to figure out whether we missed something.

On the other hand, we can notice that as the number of inquiries increases, LTV decreases (a negative correlation).

How is it possible?

In a marketplace, in general, sending more than one inquiry should raise your chances of getting the service.

But if the product overfocuses users on just sending inquiries, then their chances of receiving the service and being satisfied decrease. Users on the demand side become too busy managing connections with users on the supply side instead of moving forward to get the service. That’s one of the issues product teams often fail to notice.

While sending inquiries unlocks value, it does not deliver it yet.

Indeed, using certain activities as a proxy metric for delivered service is a good thing.

However, blindly focusing on increasing a proxy metric without a deep understanding of the relationship between the proxy and the target metric is a bad thing.
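One cheap sanity check before adopting a proxy is to actually measure its relationship to the target metric. Below is a dependency-free Pearson correlation sketch over made-up numbers that mirror the negative pattern described above; the values are illustrative, not the post's data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, implemented inline to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up numbers mirroring the pattern above: more inquiries, lower LTV.
activity = [1, 3, 5, 8, 13, 20]
ltv = [50, 45, 40, 30, 25, 20]
print(round(pearson(activity, ltv), 2))  # strongly negative, close to -1
```

A strongly negative value like this is exactly the signal that "more proxy" does not mean "more target".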

Now, let's compare revenue per activity level.

The relationship between [activity_24h] and [revenue].

From the chart above we can learn that:

  • the group of users with 1 sent inquiry is quite important, as it has the biggest impact on revenue compared to the other groups
  • in general, there is no single magic threshold to align on

There are a few interesting points here that I would like to mention.

Firstly, while users who sent many inquiries have a lower LTV, this segment is big and, as a result, generates a lot of revenue. That’s why, to protect total revenue, we need to be careful when working with it.

Secondly, a few groups (4, 7, 14, and 23) show a big disproportion between [% buyers] and [revenue], that is, they have very low LTV. It makes sense to check the corresponding users on the supply side, as some of them may simply be unqualified. That’s something you would hardly find by applying correlation analysis.
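Flagging such disproportionate groups is easy to automate. A sketch with hypothetical per-group shares; the numbers and the 0.5 ratio threshold (revenue share less than half the buyer share) are assumptions for illustration:

```python
# Hypothetical per-group shares (% of buyers vs % of revenue);
# numbers and the threshold are assumptions, not the post's data.
groups = {
    4: {"pct_buyers": 3.0, "pct_revenue": 0.8},
    5: {"pct_buyers": 2.5, "pct_revenue": 2.4},
    7: {"pct_buyers": 2.8, "pct_revenue": 0.9},
}

def low_ltv_groups(groups, ratio_threshold=0.5):
    """Flag groups whose revenue share is far below their buyer share,
    i.e. whose relative LTV is well under the average (ratio of 1.0)."""
    return sorted(
        g for g, s in groups.items()
        if s["pct_revenue"] / s["pct_buyers"] < ratio_threshold
    )

print(low_ltv_groups(groups))  # [4, 7]
```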

Now let’s learn about the Time To Value metric.

[TTV] signal

In general, time-to-value metrics are very ambiguous. Why?

Their interpretation is heavily dependent on the product and context:

  • in some cases, the more time a user spends in the app, the more engaged they are, and as a result the higher the chance they will continue using the product
  • in other cases, the quicker a user gets the first value, the higher the chance they will decide to keep using the product

Let’s look at the relationship between [TTV] and LTV.

The relationship between [TTV] and LTV.

From the chart above we can conclude that there are 3 segments:

  • users with [TTV] ≤ 5 hours (4.4%)
  • users with [TTV] in a range of 6 to 22 hours (91.4%)
  • users with [TTV] ≥ 23 hours (4.2%)
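These segment shares can be reproduced by bucketing each user's TTV with the boundaries above. The sample below is toy data, not the post's dataset:

```python
from collections import Counter

def ttv_segment(hours):
    """Bucket TTV by the boundaries observed above: <=5h, 6-22h, >=23h."""
    if hours <= 5:
        return "fast (<=5h)"
    if hours <= 22:
        return "typical (6-22h)"
    return "slow (>=23h)"

# Toy TTV sample in hours; real values would come from the TTV computation.
sample = [2, 8, 10, 15, 19, 21, 30]
shares = Counter(ttv_segment(h) for h in sample)
print(dict(shares))  # {'fast (<=5h)': 1, 'typical (6-22h)': 5, 'slow (>=23h)': 1}
```

Dividing each count by the sample size gives the percentage breakdown quoted above.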

These stats are quite insightful; let’s unpack them.

On the one hand, we see that a very small % of buyers started sending inquiries very quickly (within a few hours). Why is this % so small?

  • Is the intent of users from this segment not so urgent?
  • Is the UX overwhelming?
  • Is the recommendation list not working properly?

There is definitely something wrong with the first experience, and the product team should invest heavily in user interviews. I would say this insight is probably the most important one in this whole research.

On the other hand, the % of buyers who send inquiries 48 hours or later is also quite limited. So users don’t shelve their needs for long. In any case, it makes sense to adjust onboarding for these ‘slow’ users as well.

By the way, if we look carefully at the chart above, we can spot that in the range from 7 to 21 hours there is a positive correlation between TTV and LTV.
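This kind of local pattern is easy to miss with a single global correlation. Here is a sketch that checks the trend direction inside a chosen TTV window; all LTV values are made up to mirror the shape described (a dip after the fastest users, then a rise through the 7–21h window):

```python
# Toy mean LTV per TTV hour; values are made up for illustration.
ltv_by_ttv = {3: 40, 7: 30, 10: 34, 14: 38, 18: 42, 21: 46, 25: 35}

def trend_in_window(data, lo, hi):
    """Direction of the LTV trend inside a TTV window:
    +1 strictly rising, -1 strictly falling, 0 mixed."""
    window = [v for t, v in sorted(data.items()) if lo <= t <= hi]
    if all(a < b for a, b in zip(window, window[1:])):
        return 1
    if all(a > b for a, b in zip(window, window[1:])):
        return -1
    return 0

print(trend_in_window(ltv_by_ttv, 7, 21))  # 1: LTV rises with TTV here
print(trend_in_window(ltv_by_ttv, 3, 25))  # 0: no global monotone trend
```

The contrast between the windowed and the global result is the point: a flat or mixed overall picture can hide a clean local trend.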

In other words, when users take their time and find the proper users on the supply side, their LTV increases. That’s why whether to push these users to contact the recommended supply-side users faster is an open question.

Obviously, we should (constantly) improve our recommendation list. However, some users may simply have unique or quite specific needs that the recommendation list can’t satisfy. An alternative could be to direct these users’ attention to advanced filters to fine-tune search results.

There is also another aspect here: do we want to control users or make them feel in control? Guess which option users like and which they don’t. That’s a very interesting direction, but it’s definitely out of the scope of this post.

Now, let’s compare revenue per TTV level.

The relationship between [TTV] and [revenue].

From the chart above, we can learn:

  • the highest revenue comes from users with TTV = 19 hours (quite ‘slow’ users)
  • in general, there is no single magic threshold to align on

So, the biggest share of revenue comes from ‘slow’ users. That’s why, to protect total revenue, we need to be careful when working with them.

In general, the data about activity and TTV clearly shows that you cannot always find one simple, nice behavioral threshold for the product team to use as a proxy. Moreover, using an artificial threshold that isn’t backed up by data can easily hurt your business.

In other words: fast and smart customers, that’s how we think about them; slow and confused customers, that’s who they are in reality. Always learn from your real customers, not the customers in your head.

SUMMARY

  1. LTV can hardly be explained well by behavioral signals, as LTV depends heavily on monetary signals that are outside the behavioral scope.
  2. More user activity does not guarantee a higher LTV.
  3. A short Time To Value does not guarantee a higher LTV either.
  4. Learning from signal distributions is much more insightful than calculating the correlation between behavioral signals and LTV.
