The Hidden Cost of Email Fatigue (and What to do About it)

Ardavan Beigui
13 min read · Oct 31, 2019


Since we launched Tinyclues in 2013, I’ve had the good fortune to meet over 3,000 marketers, from CMOs to email marketing associates, from global Fortune 500 companies to small family businesses. In these conversations, one thing has consistently stood out: the contradiction between marketers’ deep care for their customers ("not wanting to pester them") and what reality dictates they do on a daily basis:

“We probably send too many messages to our customers; but each message drives revenue, and that’s what we’re measured on. Our unsubscribe rates seem low, our competitors are even more aggressive, and we have a lot of pressure to drive results. What choice do we have?”

In sum, all marketers face the dilemma of being customer-centric while driving revenue, and they rely on fatigue measurements to balance the two.

However, could it be that the most commonly used measure of fatigue — the unsubscribe rate — is flawed? Could it be that we’ve all been measuring in the wrong place?

Part 1 — What is fatigue and how can it be measured? How is fatigue factored into marketers’ campaign planning today?

Fatigue can be defined as the decline in your customers’ engagement over time as a result of the way you message them.

As such, it can’t be measured by unsubscribes alone, even though customers opting out of your list are a clear sign of fatigue (as are customers who "report spam").

By definition, fatigue can truly be measured only through the decrease over time in the engagement you get from your campaigns. This requires scientific protocols like control groups over potentially extensive periods of time, especially if you want the conclusion to be statistically significant. Many organizations might be willing to pay this high price when it comes to measuring the incremental contribution of a given marketing channel to their overall revenue, but they won’t necessarily be willing to pay the same price to effectively measure how much fatigue is impacting their customers. And this is understandable, given that marketers are almost always measured on the revenue and engagement they drive now, NOT on how much fatigue they avoid, nor how much future revenue they contribute to today.

Unfortunately, as the unsubscribe rate is the only “fatigue metric” immediately available in most email campaign tools, it has naturally become the default indicator of how much fatigue is experienced in a given campaign…

I love asking marketers the following questions:

a. “What is your unsubscribe rate? How do you feel about it?”

Most answers go something like: “The unsubscribe rates are fairly low, under 0.1%. Our customers don’t seem too annoyed by what we send them.”

The truth is that unsubscribe rates always appear this low, which does make them seem insignificant. That also makes them hard to compare across campaigns, as we sometimes have to go several digits down, to the hundredths or thousandths of a percentage point, to notice a difference. That lures many marketers into believing that “fatigue is under control” and “unsubscribe rates are stable”.

That is deceptive, because the unsubscribe rate, defined as the ratio between the number of unsubscribes and the number of messages sent, is a shallow metric (more on this below).

b. “When a customer unsubscribes from your list — do you know how much future revenue is lost? Can you put a $ value on an unsubscribe?”

99% of the time, the answer is “Mmmh, not really.” Many marketers have built LTV (Lifetime Value) models for their customers; but these models often don’t include unsubscribes, so most marketers can’t answer the specific question about the loss in future revenue resulting from an opt-out.

One marketer shared with me how they estimated that value. They would look at the average revenue that an opted-in customer, receiving messages weekly, would drive over 12 months. They then modeled, for each unsubscribe, how much revenue was lost over the 12 months following the unsubscribe. In their case, that number was $200 (email is a big driver of traffic in their business).

If a campaign drives 0.1% unsubscribes out of a list of 3M, that’s 3,000 customers that can no longer receive emails as a result of that one campaign. In our example above, that would represent 3,000*$200 = $600,000 of future revenue loss, as a direct result of a single campaign. If there was a way to decrease that rate from 0.1% to 0.07% (without loss of revenue), that would be 900 unsubscribes saved — or $180,000 in future revenue.

Maybe your list size is not 3M but 1M, and you’ve measured that your email programs drive nothing close to $200 of revenue per customer per year, but $25. An unsubscribe rate of 0.1% on a message you send means 1,000 customers you can’t contact anymore, or $25,000 of revenue lost in the coming 12 months. Is that negligible?
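As a quick sanity check, the back-of-the-envelope math above can be sketched in Python. The function name is illustrative, and the per-customer annual revenue figure is something you must estimate from your own data (the examples use the article's $200 and $25):

```python
def future_revenue_loss(list_size, unsub_rate, revenue_per_customer_year):
    """Estimate the 12-month revenue lost to one campaign's unsubscribes.

    revenue_per_customer_year is an assumption you must derive from your
    own business data; the article's examples use $200 and $25.
    """
    unsubscribes = list_size * unsub_rate
    return unsubscribes * revenue_per_customer_year

# The two scenarios discussed above:
print(round(future_revenue_loss(3_000_000, 0.001, 200)))  # 600000
print(round(future_revenue_loss(1_000_000, 0.001, 25)))   # 25000
```

The point of the sketch is not precision, it is making the cost of an unsubscribe visible at all: once the function exists, it is trivial to run it against every campaign you send.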

Bonus material: let’s look at the presumed “low unsubscribe rate” from yet another angle. Let’s say we start with a 3M list, and get a 0.1% unsubscribe for each message we send out. At the rate of 2 messages per week — or 100 over 12 months, we’ll have lost approximately 300K customers from our list — about 10% (for those who pulled their calculators or spreadsheets out — yes, the exact number is 285,624 unsubscribed contacts!). That’s high, but bad optics can alter our vision here as well: with all the new customers signing up through acquisition efforts, as long as the total figure remains around 3M, the illusion of stability remains.
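For the spreadsheet-inclined, the compounding above can be verified in a few lines of Python. The key detail is that each message loses 0.1% of the *remaining* list, not of the original 3M, which is why the answer is slightly below a naive 100 × 3,000 = 300,000:

```python
def contacts_lost(list_size, unsub_rate_per_message, messages):
    """Cumulative unsubscribes when each message loses a fixed fraction
    of the remaining list (compounding; ignores new sign-ups)."""
    remaining = list_size * (1 - unsub_rate_per_message) ** messages
    return list_size - remaining

lost = contacts_lost(3_000_000, 0.001, 100)
print(round(lost))  # 285624, roughly 10% of the list over 12 months
```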

c. “What is your messaging frequency? Why?”

Most answers are a mix of “our frequency is based on industry & competition benchmarks”, “habit”, and “it depends on each customer’s responsiveness to emails”. The numbers do seem to vary by industry: fashion brands typically send 3–5 messages a week. Flash-sale and discount-driven retailers send daily (or even more!). Many non-discounted travel companies tend to send 2–3 messages per week, sometimes less.

Many marketers adjust the frequency based on their customers’ email activity: some messages are sent only to customers that have shown email engagement the past 90 days; highly engaged customers may receive more messages, etc. Such approaches improve deliverability, rather than decrease fatigue; they also tend to shrink the list size over time through self-fulfilling prophecies (customers engage more because they receive more, and engage less because they receive less).

When I then ask: “How did you, as an organization, come to the decision to plan this specific number of messages per week?”, the best answer I get is “We conducted control groups 2 years ago and found that sending more than 3 messages was problematic, as each message after that tended to drive a lot more unsubscribes.” It’s a great answer! But, it was 2 years ago. Things may have changed, and such tests aren’t run every year to update the findings. In fact, most of the time, the answer is really “it’s what we’ve been doing for a while, and right now we have other priorities so we won’t be investing much in fatigue management.” (Read: “we’re behind on revenue and we’re gonna keep blasting”)

d. “If your campaigns don’t drive fatigue, and you’re sending 3 (or 5, or 10) per week — why not double it? Or triple it?”

Of course, this question is a bit of a provocation. But it highlights something important: marketers know they drive fatigue, even if the unsubscribe rate doesn’t exactly show it. We know that not everyone’s interested in receiving every message we send, but it’s easier to send them anyway, to avoid leaving revenue on the table.

A major reason why sending 15 campaigns per week is challenging is that it’s really, really hard to target campaigns efficiently and go beyond basic segmentation or affinity models (but that is the subject of another post to come). If you could go from 5 to 15 campaigns while measuring that they drive higher engagement and lower unsubscribes, wouldn’t you do it?

Part 2 — The missing metrics to measure fatigue

Here are the results of 3 campaigns sent by a retailer to their customer list.

Traditional KPIs for email campaigns with the standard and most commonly accepted definition of “unsubscribe rate”

Let’s look at Campaign 1:

• 745,592 emails were sent

• 136,314 opens, so that’s an 18% open rate (136,314 / 745,592)

• 7,777 customers clicked to visit the website, which is 6% CTOR (Click-to-Open Rate = clicks among openers, hence 7,777 / 136,314) — and also a Click-Through Rate (CTR) of 1.04% based on the sent messages (7,777 / 745,592)

• based on that company’s attribution rules, they measured 51 orders, which is a conversion rate for the visits of 0.66% (51 / 7,777)

• they drove $8,322 in sales with those 51 orders

• the RPM (Revenue Per Message) is $0.011 ($8,322 / 745,592). The chart above displays the RPM as the Revenue Per 1,000 Messages — hence the $11 figure. Different marketers prefer one form or the other; both are valid.

• 543 people unsubscribed, which represents 0.07% of the number of customers who were sent a message (543 / 745,592).
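The traditional KPIs above are straightforward ratios; here is a minimal Python sketch using Campaign 1's figures from the article (variable names are mine):

```python
# Campaign 1 figures from the article
sent, opens, clicks = 745_592, 136_314, 7_777
orders, revenue, unsubs = 51, 8_322, 543

open_rate  = opens / sent           # ~18% of recipients open
ctor       = clicks / opens         # click-to-open rate, ~6%
ctr        = clicks / sent          # click-through rate, ~1.04%
conversion = orders / clicks        # ~0.66% of visits convert
rpm_per_1k = revenue / sent * 1000  # ~$11 of revenue per 1,000 messages
unsub_rate = unsubs / sent          # ~0.07%, the "traditional" fatigue metric
```

Every one of these ratios, including the unsubscribe rate, uses the number of sent messages or a funnel step as its denominator, which is exactly what the rest of this section questions.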

If we compare Campaigns 1 & 3, we can see that Campaign 1 has a 40% lower unsubscribe rate. So, it drives less fatigue than Campaign 3 — or does it? And how would you compare Campaigns 1 & 2, which both have an unsubscribe rate of 0.07%?

Answering those questions with the available metrics would be in vain, because we lack a way to measure the fatigue caused by a campaign relative to the good it drives (engagement and sales). Unsubscribes are currently measured relative to the number of sent messages, and that’s a problem: if we trained an algorithm to optimize for this unsubscribe rate, it would recommend sending messages to customers that don’t open emails, hence they can’t unsubscribe. How would that be a good thing?

What if, instead of looking at the “number of sent messages”, we looked lower in the funnel, at metrics that we actually care about? Like:

  • Opens
  • Visits (clicks)
  • Transactions
  • Revenue

Let’s find out how that would change our paradigm.

2.1 — Introducing the “unsubscribe-to-open” rate

Looking at the same 3 campaigns, we’ve now introduced one more metric in the column to the right, called “unsubscribe-to-open rate” (# unsubscribes / # opens):

New column introduced to the far right: “unsub-to-open rate”

This metric literally measures the damage done by the campaign to customers that actually see it. One of its many benefits is that it will expose those campaigns that have a deceiving subject line, luring customers into opening the email, only to disappoint them when they see the actual content: in such cases, the “unsubscribe-to-open” rate may skyrocket.

How do the three campaigns compare now?

• Campaigns 1 & 3 are now pretty close — 0.4% means that for every 250 customers that see the message, 1 customer decides to opt out.

• Campaign 2 is doing better — with 0.23% it takes about 430 customers to see the message in order for 1 of them to opt out.

It’s already quite a different read!
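Using Campaign 1's numbers, the new ratio can be computed in two lines (a sketch, with my own variable names):

```python
unsubs, opens = 543, 136_314        # Campaign 1 figures from the article
unsub_to_open = unsubs / opens      # damage per customer who sees the message

print(f"{unsub_to_open:.2%}")       # 0.40%
# i.e. roughly one opt-out for every 250 customers who open the email
print(round(1 / unsub_to_open))     # 251
```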

2.2 — Introducing the “unsubscribe-to-click” rate

New column introduced to the far right: “unsub-to-click rate”

The “unsubscribe-to-click” rate is the ratio between the unsubscribes and the total clicks (or website visits) a campaign drives: for Campaign 1, it’s 543/7,777=7%.

For every 100 clicks we get in Campaign 1, we drive 7 unsubscribes. That metric captures the cost of driving traffic to the website.

Well, the three campaigns look different again: for each actual engagement it drives, Campaign 1 does the most damage, while Campaign 3 does less than half of Campaign 1’s. Campaign 2 wins by driving only 2 unsubscribes for every 100 visits.
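The same two-line pattern works here, again with Campaign 1's figures:

```python
unsubs, clicks = 543, 7_777          # Campaign 1 figures from the article
unsub_to_click = unsubs / clicks     # cost of driving traffic to the site

print(f"{unsub_to_click:.0%}")       # 7%: 7 unsubscribes per 100 visits
```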

So, which campaign drives the most fatigue? Before we answer, let’s add a final twist.

2.3 — Introducing the “$ per unsubscribe” metric

Last Column introduced to the far right: “$ per unsub”

I agree, it sounds like a strange metric, but it’s actually fairly simple: the formula is (revenue in $ / # unsubscribes), which measures how much revenue a campaign drives each time it leads 1 contact from our list to unsubscribe. It’s therefore a metric we want to maximize.

• Every time Campaign 1 drives $15 in revenue, it also drives 1 customer to unsubscribe.

• Campaign 3 does better — in fact, 3x better, because every time it drives 1 unsubscribe, it’s also driving $48 in revenue.

• Campaign 2 wins with a whopping $71 per unsubscribe.

Bonus material: the $-per-unsubscribe metric can also be seen as a proxy for the remaining lifetime revenue that your customer base can drive through email. Let’s say your average customer campaign has a $-per-unsubscribe value of $50. It means that you are willing to exchange $50 for 1 unsubscribe from your list. If your list is 3M customers, you’re in a way willing to exchange your ability to send them emails for a total of $50*3M = $150M. At the very least, it means that, with the current fatigue driven by your campaigns, your 3M customer base CANNOT drive more than $150M through your email campaigns. So, what if you’re currently running campaigns with a “Campaign 1” level of performance, and you could improve them to the performance of Campaign 3 (a 3.2x jump in $-per-unsubscribe)? That “email remaining lifetime revenue” would jump proportionally, from $150M to $480M. Not bad!
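Both calculations can be sketched in a few lines of Python. The function names are mine; the $50 average and the 3M list size are the article's hypothetical figures, and this proxy is a rough upper bound, not a formal LTV model:

```python
def dollars_per_unsub(revenue, unsubscribes):
    """Revenue a campaign drives per contact it pushes to unsubscribe."""
    return revenue / unsubscribes

def email_lifetime_revenue_proxy(avg_dollars_per_unsub, list_size):
    """Rough upper bound on what the list can still drive through email
    at the current fatigue level (the article's proxy, not a formal LTV)."""
    return avg_dollars_per_unsub * list_size

print(round(dollars_per_unsub(8_322, 543)))         # 15 (Campaign 1)
print(email_lifetime_revenue_proxy(50, 3_000_000))  # 150000000
```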

Part 3 — All of that looks great! So, what’s the conclusion?

Even if measuring fatigue thoroughly and scientifically is difficult, time-consuming, and arguably impossible in practice, this article offers a pragmatic alternative that can change the way you look at your campaign performance and the decisions you make about the campaigns you run. The traditional “unsubscribe rate” is a weak metric, because marketers don’t care nearly as much about the number of messages sent as they do about engagement (opens, clicks) and sales.

That’s why it’s critical to measure unsubscribes against engagement and sales. The numbers are more meaningful, they’re easier to compare, and they don’t all seem ridiculously small and negligible.

So, which campaign drives the most fatigue? Let’s see how they rank based on which rate you look at:

Ranking of all 3 campaigns based on the rate you choose to look at

Campaign 2 is the clear winner.

Campaign 1 seemed like a big winner with the traditional unsubscribe rate, but is actually the biggest loser on lower-funnel metrics (clicks and $). It is actually an awful campaign when it comes to “fatigue vs. revenue” with a $-per-unsubscribe ratio 3x lower than Campaign 3 and 5x lower than Campaign 2. Either it’s a terrible campaign, or it was sent to the wrong customers.

Campaign 3, which was the clear loser using traditional metrics, becomes #2 and is actually a solid campaign. With 844 unsubscribes, it can initially seem like a poor performer against Campaign 1 (543 unsubscribes), as it drives 55% more unsubscribes. But… it also drives a massive 382% more revenue. It is without a doubt a better campaign from a rational business perspective. However, the old “unsubscribe rate” wouldn’t show that. It would show the opposite. And mislead us.

So in the end, which metric should you use? Well, you will have to figure that out :-) My advice is:

Whatever you do, DO NOT restrict yourself to the antiquated unsubscribe rate.

1. If you’re running campaigns with a branding purpose, to drive visibility, why not look at the “unsubscribe-to-open” rate? It answers the question: “what’s the percentage of customers that are irritated by what they see?”

2. If you’re focused on driving visits to your site, then the “unsubscribe-to-click” rate is a good fit.

3. And if, like the majority of marketers, you’re measured on revenue above all else, then the $-per-unsubscribe can be a great way to look at each campaign and determine whether they drive too much damage relative to their revenue.

Note: the first two KPIs (the unsubscribe-to-open rate and the unsubscribe-to-click rate) can be computed from email campaign data alone. The third requires reconciled purchase and email data.

Adding those 3 simple metrics to your existing dashboards, and reviewing them regularly as part of your weekly Campaign Performance Review, will bring valuable insights into the damage done by your campaigns relative to any engagement metric that matters to you (views, visits, or sales). It will help surface the good campaigns to keep running (and run more of), and identify the ones that aren’t working and deserve to be reconsidered. You may be surprised by the findings!

Then, the real upside: you can discover for yourself how you can do more marketing without driving fatigue, and confirm based on your own data that fatigue has nothing to do with messaging frequency, and everything to do with the repetition of irrelevant messages. As long as you’re relevant, your customers will enjoy hearing from you. And you don’t have to take my word for it — you can now measure it for yourself!

For more on implementing these takeaways, or simply to dive deeper into the issues discussed above, simply connect with me on LinkedIn or send me an email at ard[at]tinyclues[dot]com.

This post about Email Fatigue is the first in a series of three that I will be publishing over the coming weeks. It looks at some positive changes marketers can immediately implement, at no cost, simply by looking at their existing email campaigns from another angle. The next post will cover material on how the Amazon behemoth has impacted (in insidious ways) the way marketers build their campaign plans, and what they can do about it. The final post will dive into how marketers can build audacious and business-driven campaign plans that yield world-class results.
