LTV Series

CAC payback period benchmarks

When the ideal world faces reality

Paul Levchuk
Jun 10, 2024 · 8 min read

In the previous post, I presented a near-real-life User Acquisition case and calculated all the corresponding metrics, including CAC, LTV, and the CAC payback period.

As a rule, calculating a metric is not the most challenging part. The most challenging part is interpreting a metric: whether the result is great, close to average, or unsatisfactory.

To evaluate the resulting metric, we usually rely on experience: what we have seen before in this business or similar ones. Another option is to check open sources where someone has already collected stats on the metric of interest.

Today we will talk about the CAC payback period benchmarks collected by Lenny Rachitsky. Lenny did a great job contacting the most prominent growth/marketing leads, interviewing them about the CAC payback period, and distilling the results into a benchmark table (see below). That’s how he came to publish his famous ‘What is a good payback period?’ article in Dec 2021.

Some of you may say these stats are a bit outdated, but in my experience, CPM/CPC/CPA only grow over time due to media inflation. So the CAC payback period only gets longer.

Let’s return to Lenny’s CAC payback period benchmark table. It looks like this:

CAC payback period benchmark table. Source: https://www.lennysnewsletter.com/p/payback-period

From the table above, B2C businesses have the following benchmarks:

  • the payback period ≤ 1 month is GREAT
  • the payback period ≤ 6 months is GOOD
  • the payback period ≤ 12 months is OK
  • payback on the 1st transaction is an Exceptional case
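
These tiers are easy to encode. Below is a minimal sketch, assuming months as the unit and treating payback on the 1st transaction as a period of 0; the function name is my own framing of the list above:

```python
def b2c_payback_tier(payback_months):
    """Map a B2C CAC payback period (in months) to a benchmark tier."""
    if payback_months <= 0:   # paid back on the very first transaction
        return "Exceptional"
    if payback_months <= 1:
        return "GREAT"
    if payback_months <= 6:
        return "GOOD"
    if payback_months <= 12:
        return "OK"
    return "BAD/LOSSES"       # longer than 12 months, or never

print(b2c_payback_tier(9))  # → OK
```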

Important caveat: to assess the CAC payback period correctly, we need to give each paid campaign the same amount of time to pay back the costs invested in it. In my dataset, each paid campaign has payment stats for up to 36 periods (~3 years).
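
The calculation itself boils down to accumulating each period's net revenue until it covers the acquisition cost. A minimal illustration (the function name and numbers are mine, not the actual dataset):

```python
def cac_payback_period(cost, net_revenue_by_period):
    """Return the first period (1-based) in which cumulative net revenue
    covers the acquisition cost, or None if it never pays back."""
    cumulative = 0.0
    for period, revenue in enumerate(net_revenue_by_period, start=1):
        cumulative += revenue
        if cumulative >= cost:
            return period
    return None

# A campaign that costs 900 and earns 100 per period pays back in period 9.
print(cac_payback_period(900, [100.0] * 36))  # → 9
```

Campaigns that return `None` after all 36 periods are the ones that never paid back at all.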

Now it’s time to get to the data I generated previously and apply Lenny’s CAC payback period benchmarks to it.

Paid channel level

In my previous post, I gave you some stats about the Paid channel. The overall performance looks like this:

Paid channel overall performance.

At a very high level (that is, at the Paid channel level), the CAC payback period = 9 periods (1 period = 30 days).

According to Lenny’s CAC payback period benchmarks, this overall result can be considered between GOOD and OK.

Since there is nothing more of interest at the Paid channel level, let’s drill down to the campaign level.

Paid campaign level

By breaking down the Paid channel stats by campaign we can learn how the CAC payback period varies between campaigns.

Paid channel: CAC payback period by campaigns (from the previous post).

From the campaign table above, we can learn the following about the 12 paid campaigns:

  • 0 paid campaigns can be considered Exceptional
  • 1 paid campaign can be considered GREAT
  • 4 paid campaigns can be considered GOOD
  • 3 paid campaigns can be considered OK
  • 4 paid campaigns either have long CAC payback periods or haven’t paid back at all

In other words, the paid campaign distribution looks like this:

  • GREAT — 8%
  • GOOD — 33%
  • OK — 25%
  • long payback or no payback — 33%

The important point here is that the campaign stats above are also aggregated: each paid campaign ran for 5 months, and the results were rolled up to the campaign level.
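
The campaign-level split above can be reproduced in a few lines. The payback values below are illustrative, chosen only to match the 1/4/3/4 split described above:

```python
from collections import Counter

# Hypothetical payback periods for the 12 campaigns (None = never paid back).
paybacks = [1, 4, 5, 6, 6, 9, 10, 12, 18, 25, None, None]

def tier(p):
    """Bucket a payback period into the benchmark tiers used in this post."""
    if p is None or p > 12:
        return "BAD/LOSSES"
    if p <= 1:
        return "GREAT"
    if p <= 6:
        return "GOOD"
    return "OK"

counts = Counter(tier(p) for p in paybacks)
for name, n in counts.items():
    print(f"{name}: {n / len(paybacks):.0%}")
```

This prints the same shares as in the list above: GREAT 8%, GOOD 33%, OK 25%, and 33% for the long/no-payback bucket.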

Paid campaign by month level

The most interesting things happen at this level. It is the operational level that the User Acquisition (UA) team works on.

Below is the table with the Costs that the UA team invested in acquiring new users and the net revenue that each campaign generated during the next 36 periods.

Costs vs net revenue by paid campaigns which ran from Jan to May.

The most important learnings from the table above are:

  • In different months, the UA team tested different campaigns. That’s why the cost breakdown by campaign differs from month to month.
  • No single campaign was stable in terms of costs and revenue: sometimes costs varied, other times revenue was volatile.

Now, let's add more complexity and calculate the CAC payback period for each paid campaign launched in a specific month.

The detailed CAC payback period table looks like this:

CAC payback period by paid campaigns that ran from Jan to May.

I bet you didn’t expect such variability, did you?

You can’t say whether your UA team is doing great just by applying a single benchmark number. To find the answer, you need to figure out the distribution of your data and then compare it with a benchmark distribution.
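
One way to make "compare it with a benchmark distribution" concrete is to compute the share of campaigns in each tier and look at the gap against a target mix. Both dictionaries below are illustrative assumptions, not data from the post:

```python
# Observed tier shares vs. a target mix the business would be happy with.
observed = {"GREAT+GOOD": 0.35, "OK": 0.10, "BAD+LOSSES": 0.55}
target   = {"GREAT+GOOD": 0.50, "OK": 0.20, "BAD+LOSSES": 0.30}

for tier, share in observed.items():
    gap = share - target[tier]  # negative gap = under-represented tier
    print(f"{tier}: observed {share:.0%}, target {target[tier]:.0%}, gap {gap:+.0%}")
```

The sign of each gap tells you which tiers are over- or under-represented compared to the mix you are aiming for.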

That’s the case when the ideal world of benchmarks faces reality.

Paid campaign performance distribution

Let’s try to figure out the CAC payback distribution for paid campaigns that ran from Jan to May.

CAC payback period distribution.

There are a lot of interesting things that we can learn from here:

  • We finally caught the case where a campaign paid itself back instantly. Unfortunately, it’s a very rare case, which is why I didn’t even color it.
  • GREAT and GOOD combined account for about 35% of all cases (at the campaign level it was 41%). I consider both positive and colored them green.
  • OK cases decreased from 25% to 10%. The reason? During testing, some campaigns turned out to be simply BAD or became LOSSES. I consider this case neutral and colored it yellow.
  • BAD and LOSSES combined account for a whopping 52% of all cases (at the campaign level it was 33%). I distinguish between them: BAD campaigns still contribute profit, even if it takes too long, so I colored them grey; LOSSES are just losses, so I colored them red.

The last point reminds me of the famous quote attributed to John Wanamaker: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half”.

We managed to figure out which paid campaigns in which periods were wasted. But to do so, we had already spent the money and waited 36 periods. Solving this challenge earlier is the million-dollar question that every marketing team tries to answer.

Are benchmarks useful at all?

The brutally honest answer: partially useful.

Let’s return to the chart “CAC payback period by paid campaigns that ran from Jan to May”. Based on that chart we can say the following:

  • Only campaign 13 was fairly stable from month to month.
  • The next 4 GOOD campaigns (16, 18, 15, 19) had months when they were in a BAD state or even generated LOSSES.
  • The LOSSES campaigns (23, 24) didn’t have a single month in which they performed at even an OK level.

As you can see, the real world is not static, and blindly following benchmarks can simply stall your business.

So, should we turn some campaigns off?!

On the one hand, it doesn’t make sense to turn off GOOD or OK campaigns, as there is no guarantee that newly launched campaigns will be better or more stable. We need to monitor them, figure out why they stopped working, adjust them, and move forward.

Moreover, big ad networks like Google Ads or Facebook Ads need some time to figure out which traffic fits your product. If you stop a campaign for a week, returning to the point where you stopped is simply impossible: you will have to wait until their ML algorithms re-learn from your product.

On the other hand, it makes sense to heavily adjust BAD or LOSSES campaigns. If adjusting them doesn’t help, then… keep reading this post.

So, it’s not a problem if some of your paid campaigns fail from time to time. The key is to have a healthy distribution of paid campaigns across the CAC payback period tiers (GREAT/GOOD and the rest).

What else can we do here?

There is a great saying: “You can’t put revenue into your bank account, only profit”.

To figure out whether the UA team is really doing badly, let’s add profit to the picture. This will help us see whether the UA team’s experiments are too costly for the business.

Profit and ROI summary.

From the Totals in the table above we can learn that:

  • Only in Jan did the UA team have a negative cumulative ROI of -4%.
  • Even taking into account the regular losses of paid campaigns 23 and 24, the UA team still managed quite a good cumulative ROI by month: 60+%. Moreover, when cumulative ROI dropped to 31% in Apr, it was not because of these two LOSSES campaigns.
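
As a sanity check, cumulative ROI per cohort month is just (net revenue - cost) / cost. The figures below are illustrative assumptions, picked only to reproduce the pattern described above (-4% in Jan, ~31% in Apr, 60+% in the other months), not the actual dataset:

```python
# Hypothetical cohort costs and 36-period net revenue by launch month.
costs   = {"Jan": 10_000, "Feb": 12_000, "Mar": 11_000, "Apr": 14_000, "May": 13_000}
revenue = {"Jan":  9_600, "Feb": 19_900, "Mar": 18_300, "Apr": 18_340, "May": 21_000}

for month in costs:
    roi = (revenue[month] - costs[month]) / costs[month]
    print(f"{month}: cumulative ROI = {roi:+.0%}")
```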

So, in financial terms, over ~3 years the UA team managed to show quite decent results.

Wrapping up, I would like to show you the distribution by profit:

Paid campaigns profit distribution.

There are a few interesting points here:

  • Even though Exceptional paid campaigns managed to generate 13% of profit, we can barely rely on them. Moreover, fast payback is not always a strong predictor of large future profits (e.g., in Apr, the profit of paid campaign 15 is rather small).
  • GREAT and GOOD campaigns are the workhorses that generate the majority of profit. Even if you have many GOOD campaigns and far fewer GREAT ones, this can still work well.
  • Having LOSSES campaigns is part of the game. You can’t avoid them, but you can manage them.

SUMMARY:

  1. The proposed B2C benchmarks do make sense in terms of profit. It’s always a good idea to validate whether they make sense for your business model.
  2. Don’t strive to get only GREAT or GOOD campaigns. It’s actually impossible. Moreover, any GREAT/GOOD campaign can fail from time to time. Be ready for this.
  3. What to do with BAD and LOSSES campaigns is an open question. Sometimes it makes sense to turn them off, sometimes — it doesn’t. It’s all about your goal: company profit vs customer growth.
  4. The key is to have a proper distribution of paid campaigns with corresponding CAC payback periods (GREAT/GOOD and others).

In the next post, I will compare which optimization strategy (CAC payback period or LTV / CAC) is better and, more importantly, in which cases.
