Back in 2017, Dove released a short but highly controversial ad on Facebook. Even though the GIF ad lasted just a few seconds, the backlash over its racial insensitivity immediately caught fire. The company promptly removed the ad and apologized, but the damage had been done. Threatened boycotts of Dove spread across social media, and the brand made headlines for all the wrong reasons.
While we frequently hear about the big “what were they thinking?” screw-ups, it’s actually very common for a brand to spend money on an ad that causes backlash: one that not only fails to work, but actually reduces purchase intent and/or harms brand perception.
In aggregating all of the creative tests that Civis ran in 2018 (predominantly 30- and 60-second television spots from large companies, but also digital and audio ads), we found that not only were 75% of the ads we tested statistically ineffective, but more than 10% caused backlash. Considering the amount of money spent on advertising each year, that’s a big problem.
As a numbers guy, I’m not completely surprised by these findings — the way many major brands test ads today is scientifically flawed. Most use pre-flight focus groups and panels, which are inherently problematic because they rely on collecting the opinions of a few people who don’t accurately represent the full population (for starters, they have the time to spend participating in a focus group). Dynamic creative optimization is important, but it tests response to an ad that’s already in-market — when, in the case of Dove and many others, the damage is already done.
So, what’s the solution? Of course being more conscientious when planning is step one, but there needs to be a better balance between post-launch optimization and pre-launch analytics. Marketers must apply scientific rigor to their testing of ad creative before it’s ever released to the public.
When we tested the Dove ad, we unsurprisingly found it caused backlash — respondents who saw the ad were 3% less likely to purchase soap from Dove, and among respondents who were 65 or older, the likelihood of purchase dropped a full 9%.
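The underlying measurement is a randomized design: compare purchase rates between respondents who saw the ad and a control group who didn’t, overall and within subgroups. A minimal sketch of that calculation, using hypothetical rates chosen only to reproduce the percentages quoted above (these are not Civis’s actual data or methodology):

```python
# Illustrative sketch of a randomized creative test. "Lift" is the
# percentage-point difference in purchase intent between respondents
# shown the ad (treatment) and those who were not (control).
# A negative lift indicates backlash.

def lift(treatment_rate: float, control_rate: float) -> float:
    """Change in purchase rate attributable to the ad."""
    return treatment_rate - control_rate

# Hypothetical rates consistent with the figures in the text:
overall = lift(treatment_rate=0.47, control_rate=0.50)      # -3 points
age_65_plus = lift(treatment_rate=0.41, control_rate=0.50)  # -9 points

print(f"Overall lift: {overall:+.0%}")
print(f"65+ lift:     {age_65_plus:+.0%}")
```

Running the same comparison within each demographic subgroup is what surfaces results like the 65+ figure, which the overall average would otherwise mask.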
A few other insights from our meta-analysis for those who are curious:
- Our research shows nearly no relationship between what people “like” and what actually changes their mind. I’ll save this one for another time because there’s a lot to unpack.
- The difference in performance between the best ad and the worst ad in a given test was not trivial. The best ad was, on average, 13% better than the worst ad, and the top 10% of ads were 25% better than the worst ads.
- Among demographic subgroups, both positive and negative ad effects were often larger than for the overall population. For example, with the Dove ad, respondents who were 65 or older were an additional 6% less likely to buy Dove than the overall population.