Creative Fatigue: How advertisers can improve performance by managing repeated exposures
Introduction
Creative fatigue can be a problem on our system, but it doesn’t need to be for you. Using data, we can turn what was a headwind to performance into a tailwind. If you are interested in better understanding creative fatigue, or you suspect it could be a problem for your ads, then this post is for you.
In this post, we will discuss in depth what we mean by creative fatigue, how common and impactful it is, and ultimately what to do about it. Here is a brief look ahead into what we will discuss in detail throughout this post:
We define creative fatigue as the effect that occurs due to users seeing the same creative (the visual elements from an ad) repeatedly. Notice we are focused on the creative rather than the ad. We will discuss this more later.
Creative fatigue can be a problem. The data shows that it is common for users to see the same creative multiple times, and as this happens, conversion rates decline and cost per action increases as a result.
On an individual level, we can use creative fatigue as a signal to improve performance by adding new creative to ad sets suffering from fatigue. These insights have proven useful for advertiser guidance in the Ads Manager surface as well as for improving ads delivery optimization. Guidance to reduce fatigue can improve conversion rate by an average of 8% in high fatigue cases (as shown in the test below).
Defining Creative Fatigue
We define creative fatigue as the effect that occurs due to users seeing the same creative (the visual elements from an ad) repeatedly. Notice we are focused on how many times a user has seen a given creative, rather than an ad. This is because multiple ads may have the same creative elements, but a user would experience both ads as the same.
Below, we observe an association between repeated exposures and reductions in conversion or click-through rates for the same user. This may be because users seek novelty, and once repetitions are high enough they are no longer interested in the creative. It may also be that repetitions lead to “ad blindness,” where users tune out the creative altogether. Crucially, in either case, this reduction in conversion rate can be directly improved by delivering a new creative of equal or greater quality to the same user. In this way, creative fatigue is distinct from a more general audience saturation effect that could emerge from repeated exposures to a brand and that simply refreshing creative would not resolve. See the notional diagram below:
As an alternative to refreshing creative, targeting new audiences could also mitigate this problem. However, audiences are finite, which creates a strong motivation to develop new and differentiated creative assets that counteract this phenomenon.
Measuring Prevalence of Creative Repetition
Before we get to how prevalent creative repetitions are, let’s take a quick detour to explain why we do not suggest using Frequency to measure creative repetition.
Dangers of measuring Frequency
Advertisers already understand the frequency metric, and many creative rotation strategies revolve around keeping frequency low. For those who don’t know, “frequency” is a commonly reported metric for ad campaigns. Frequency is a simple calculation representing the number of impressions per distinct user:

Frequency = Total impressions ÷ Distinct users reached
But there are two big problems with our reported frequency metrics that prevent them from being good signals of creative fatigue.
Problem 1: Frequency is at an ad/ad set/campaign level. Fatigue happens at a creative level.
If the creative is the same (or very similar) across multiple ads, then the user will likely fatigue from seeing both ads. That means users can fatigue from ads across campaigns, placements, and optimizations, and even from ads with similar versions of the same creative.
To better capture this, we tracked creative/user exposures at the level of the creative rather than the ad.
Problem 2: Frequency isn’t the relevant metric
This one gets a little more abstract.
Frequency presents an average effect over a period of time and not the marginal effect. Consider this unlikely example for a given ad:
Day 1: 100K users see the ad 1 time
Day 2: 50K of the same users see the ad 2 times
Day 3: 25K of the same users see the ad 4 times
What is the frequency of this ad over the three days? Using the above calculation, the answer is 300K impressions ÷ 100K users, which gives us a frequency of 3. But by the end of Day 3, the users we are reaching have actually seen the same ad 7 times.
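The gap between the average and the marginal view can be sketched in a few lines. This is a toy reproduction of the numbers above, not production code:

```python
# Toy reproduction of the example above: average frequency vs. what
# a user reached on Day 3 has actually experienced.
days = [
    (100_000, 1),  # Day 1: 100K users see the ad 1 time
    (50_000, 2),   # Day 2: 50K of those users see it 2 times
    (25_000, 4),   # Day 3: 25K of those users see it 4 times
]

total_impressions = sum(users * views for users, views in days)  # 300K
distinct_users = 100_000  # Day 2/3 audiences are subsets of Day 1's users

frequency = total_impressions / distinct_users
print(frequency)  # 3.0

# But a user reached on Day 3 has seen the ad 1 + 2 + 4 = 7 times
day3_exposures = sum(views for _, views in days)
print(day3_exposures)  # 7
```

The reported frequency of 3 hides the fact that the marginal impression on Day 3 lands on a user who is already 7 exposures deep.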
Frequency counts are also very sensitive to the time period of the calculation and not always easily produced at the level of the creative which may span multiple ads with overlapping audiences.
Now, let’s look at how common creative repetitions are.
Users often experience repeated exposures to the same creative in our system. The mean count of previous user/creative exposures across all Meta ad impressions is 4.2, with over 19% of impressions showing a creative the user has already seen more than five times. Counts are produced over a 30-day lookback window.
This varies somewhat across advertiser segments but is especially prevalent for ads that optimize for link clicks and offsite conversion events. By way of contrast, brand advertisers see dramatically less repetition of this sort, in part due to the frequency caps those advertisers impose.
Measuring Impact of Creative Repetition
Now that we understand how common creative repetitions are, we need to understand how strongly they are associated with performance changes.
A naive way to assess the impact of these repetitions on click-through rate is to look at click-through rate by impression repetition count:
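This naive cut is just a group-by on prior-exposure count. A minimal sketch with hypothetical column names and made-up data:

```python
import pandas as pd

# Hypothetical impression log: how many times the user had previously seen
# the creative, and whether this impression was clicked
df = pd.DataFrame({
    "prev_exposures": [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2],
    "clicked":        [1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
})

# Naive view: raw CTR at each repetition count, with no confounder controls
ctr_by_repetition = df.groupby("prev_exposures")["clicked"].mean()
print(ctr_by_repetition)
```

In this toy data CTR falls from 0.5 at zero prior exposures to 0.0 at two, but as discussed next, a raw cut like this conflates fatigue with audience and ad set composition effects.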
Here we clearly see that click through rates fall with repeated exposures to a given creative.
Controlling for Audience Saturation and Ad Set Level Conversion Rate
So far this analysis neglects a few potential confounding factors:
- How do we differentiate between creative fatigue and “audience saturation”? For example, if an audience/user were to become oversaturated with impressions, a change of creative may not fully recover earlier performance. We want to specifically measure the effect that can be counteracted by refreshing creative for the same users.
- Different advertisers, ad sets, etc. have very different baseline click-through and conversion rates. If the representation of ad sets, advertisers, etc. varies as a function of repeated exposures, we need to control for this confounder.
In order to effectively control for these factors, we pulled clicks and matched them to the impressions they came from to determine, in a multivariate analysis, how large the decline in click through rate is as users see the same creative multiple times.
The model is a logistic regression that predicts the likelihood of a click given a few variables:
- The number of previous exposures at the level of the creative
- Previous exposures from the same ad set as well as account (a proxy for audience saturation)
- Estimated click-through rate (eCTR). Since not every ad impression results in a click, we use eCTR for impression-level granularity. Over a large sample, eCTR should equal CTR
- Historical eCTR for the ad set: a proxy for the overall rareness of a click from a given ad set
- First-impression eCTR: the eCTR of the user/ad pair on the first exposure. Controls for selection effects at the user level
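As a rough illustration of the model shape (not the production model; the data is simulated and all variable names are assumptions), the key term is a log-transformed prior-exposure count, which is what produces the power-law decay discussed below:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the impression-level dataset (hypothetical names)
rng = np.random.default_rng(0)
n = 50_000
prev_exposures = rng.integers(0, 20, size=n)     # N: prior views of the creative
log_exposures = np.log(prev_exposures + 1)       # log term -> (N+1)^beta decay
baseline_logit = rng.normal(-3.0, 0.5, size=n)   # stand-in for eCTR / ad set controls

# Simulate clicks with a "true" fatigue coefficient of -0.43 on log(N+1)
p_click = 1.0 / (1.0 + np.exp(-(baseline_logit - 0.43 * log_exposures)))
clicked = rng.binomial(1, p_click)

# Fit the logistic regression; the fatigue coefficient should come back
# close to the simulated -0.43
X = np.column_stack([log_exposures, baseline_logit])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, clicked)
print(round(model.coef_[0][0], 2))
```

Because the exposure count enters through log(N+1), a coefficient β on that term translates into a multiplicative (N+1)^β effect on the odds of a click, which is what makes the coefficients below interpretable as a power law.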
Here are the results for a selection of relevant ads. We’ve excluded dynamic product ads due to the complexity of how their creatives are defined, as well as ads (such as brand ads) that don’t optimize for easily tracked proxies of direct response value, such as click or conversion events.
It’s worth pointing out that we have some multicollinearity here between the creative fatigue feature and the audience saturation features. But even if we remove the audience saturation variables, the creative fatigue coefficient is similar.
We ran many versions of this regression with many different covariates that we will not enumerate here. We tested interaction terms for different advertiser segments and different optimization models. We modeled post-click conversion events instead of clicks, as well as video views and other optimization goals. We built interaction terms based on user-level features (age, gender, etc.) as well as different ad format types (static, video, carousel, etc.). Across all breakdowns this effect was roughly the same magnitude and statistically significant.
Interpreting Coefficients
These coefficients can be challenging to interpret at first glance, in particular because of the log transform involved. If we hold all else constant and vary only the number of exposures to the same creative, the reduction in the likelihood of a click follows the shape of (N+1)^(-0.43), where N is the number of times the user has seen the same creative prior to that impression (i.e., indexing starts at zero). In case that isn’t helpful, here is a plot of that relationship:
This means that at 4 repeated exposures, the associated likelihood of a click drops by about 45%.
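Reading “4 repeated exposures” as the user’s fourth view of the creative (N = 3 prior exposures under the zero-indexed definition above, an interpretation on our part), the arithmetic checks out:

```python
# Relative click likelihood after N prior exposures: (N + 1) ** -0.43
n_prior = 3  # the fourth view of the creative
relative_likelihood = (n_prior + 1) ** -0.43
print(round(1 - relative_likelihood, 2))  # 0.45 -> roughly a 45% drop
```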
Wear-in period
The above analysis calls into question a common belief among marketers: that creative repetitions are helpful up to a point, that creatives “wear in” before they fatigue. However, the above analysis (and further analysis looking specifically for this effect) found no evidence for it with direct response objectives. Clicks and conversions appear to become monotonically more expensive with repeated creative exposures. It’s worth noting that brand objectives like ad recall may behave differently.
Note that this does not mean there is no value in repeated exposures, just that repetition is optimally distributed across a large number of different-looking ads.
Experimental validation
This analysis prompted an experiment to see if we could mitigate creative fatigue in a way that improved performance. In particular, we wanted to see whether adding new creative to a fatigued ad set would improve performance in a dose-dependent way.
Creative Fatigue Level Metric
For a given impression, we can estimate the creative fatigue level as:
Creative fatigue level = 1 - (N+1)^(-0.4)
This is a conservative interpretation of the regression above, where a creative fatigue level of 0 corresponds to no fatigue and a level of 0.2 suggests an average associated drop in CTR of 20%.
To estimate the creative fatigue level for a creative, we average the creative fatigue levels across impressions.
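A minimal sketch of the metric as defined above (the exposure counts used for the average are hypothetical):

```python
def fatigue_level(n_prior: int) -> float:
    """Creative fatigue level for one impression: 1 - (N + 1) ** -0.4.
    0 means no fatigue; 0.2 suggests roughly a 20% associated CTR drop."""
    return 1.0 - (n_prior + 1) ** -0.4

# Creative-level fatigue: average over the impressions that served it
impressions = [0, 1, 3, 7]  # hypothetical prior-exposure counts
creative_fatigue = sum(fatigue_level(n) for n in impressions) / len(impressions)

print(fatigue_level(0))  # 0.0 -> a first exposure carries no fatigue
```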
Experimental Design
The high-level experimental design is a simple two-cell split test with two phases: a first phase in which ad sets run with an existing creative (creative A) while we measure its fatigue level, and a second phase in which the test cell adds a new creative while the control cell does not.
We ran this across roughly 26 thousand cases using scalable testing infrastructure.
Results
After the first 7 days, we had a distribution of fatigue levels for creative A.
During the second 7 days, we observed a dose-dependent improvement: the higher the creative fatigue level, the larger the conversion rate improvement we saw upon adding a new creative.
This suggests that adding new creative to fatigued ad sets causes an overall improvement in conversion rates.
Conclusion
We define creative fatigue as the effect that occurs as users see the same creative repeatedly. Creative fatigue appears to be a serious consideration for advertisers optimizing their performance, as this analysis shows that users commonly see the same creative, and that users’ responsiveness to a given creative degrades steadily as they see it multiple times.
This suggests there is great value in adding new and diverse creative to ad sets experiencing performance declines. Over time, we aim to help advertisers measure and monitor creative fatigue with alerts in Ads Manager, new product offerings, and general guidance that we look to establish experimentally.
Authors: Lucas J., Alex D., Matt M.