A/B testing emails: good or bad idea?

Colette
Published in Reelevant posts
3 min read · Sep 5, 2019

The famous A/B test! The marketer’s Holy Grail: that reflex to test the performance of a CTA color or a product visual. By taking hold (gently, don’t worry) of this sweet sin, the idea is to offer some fresh food for thought on a technique poorly suited to delivering the right message in the age of instantaneity.

1/ A lot of work goes into production

Designing two HTMLs may seem harmless to experienced organizations. Yet however well oiled production processes may be, the time and resources required to produce these materials for each new campaign weigh heavily on budgets. A single HTML can already be used to test zones of the email independently, for example by varying the header visuals as well as the banners!
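
As an illustration of the single-HTML approach (hypothetical markup and zone names, not Reelevant’s actual templates), each zone can be varied independently at send time:

```python
from string import Template

# One HTML shell with independent variable zones (hypothetical markup).
EMAIL_TEMPLATE = Template("""\
<html><body>
  <img src="$header_visual" alt="Header">
  <p>$main_message</p>
  <img src="$banner_visual" alt="Banner">
</body></html>""")

# Each zone varies on its own; no second HTML file is needed.
variants = {
    "header_visual": ["header_summer.png", "header_classic.png"],
    "banner_visual": ["banner_promo.png", "banner_loyalty.png"],
}

def render(header, banner):
    return EMAIL_TEMPLATE.substitute(
        header_visual=header,
        main_message="Our picks for you this week",
        banner_visual=banner,
    )

print(render(variants["header_visual"][0], variants["banner_visual"][1]))
```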

Some of our customers spend between 4 and 7 hours on each email (choosing content, creating the message, setting up the workflow in the campaign management tool, sending test emails, etc.). Let’s be honest: we can’t afford that at a time when the customer experience plays out daily across every channel.

2/ Are the statistics really that reliable?

Testing optimisation hypotheses to deliver an appropriate message, i.e. one that meets the individual’s expectations at a key moment, is commendable, as long as it is not diverted from its initial objective: comparing two elements. Yet I still often see a multitude of elements tested at the same time.

The interpretation of the results, on the other hand, raises questions. Looking only at an overall click rate, it is difficult to know which variant really won people over, and even harder to detect the area or areas that made the difference!
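
To see why an overall click rate can mislead, here is a minimal sketch in Python of the significance check such a comparison requires (the numbers are illustrative, not from the article): an apparent winner is often statistically indistinguishable from the loser.

```python
import math

def two_proportion_z_test(clicks_a, sends_a, clicks_b, sends_b):
    """Compare two click rates with a two-proportion z-test.

    Returns the z statistic and the two-sided p-value.
    """
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    # Pooled click rate under the null hypothesis (no real difference)
    p = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(p * (1 - p) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Illustrative numbers: a 4.0% vs 3.6% click rate on 5,000 sends
# per variant looks like a winner but is not significant (p ≈ 0.3).
z, p = two_proportion_z_test(clicks_a=200, sends_a=5000,
                             clicks_b=180, sends_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```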

3/ Individuals condemned to see only the losing version

It is certain that 20% of your database will be stuck with a losing version, which is a significant share of your audience. Testing in real time completely avoids this pitfall by setting no one aside, whether they open the emails or not.

As opens come in, individuals’ interactions with the tested elements automatically select the most efficient version, driven by the evolving click rate, without penalizing part of the database. In general, with real-time testing, only 2% of openers will see the lowest-performing version of the email.
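
The article does not spell out the underlying algorithm, but one common way to implement this kind of real-time selection is a multi-armed bandit such as Thompson sampling. Here is a minimal sketch in Python (a hypothetical illustration with made-up click rates, not Reelevant’s actual implementation):

```python
import random

class ThompsonSamplingTest:
    """Pick an email variant per open, learning from clicks in real time.

    Each variant keeps a Beta(1 + clicks, 1 + non-clicks) posterior over
    its click rate; we sample from each posterior and show the variant
    with the highest draw, so traffic shifts to the winner as data accrues.
    """

    def __init__(self, variants):
        self.stats = {v: {"clicks": 0, "views": 0} for v in variants}

    def choose(self):
        # Sample a plausible click rate for each variant; pick the best.
        def draw(v):
            s = self.stats[v]
            return random.betavariate(1 + s["clicks"],
                                      1 + s["views"] - s["clicks"])
        return max(self.stats, key=draw)

    def record(self, variant, clicked):
        self.stats[variant]["views"] += 1
        self.stats[variant]["clicks"] += int(clicked)

# Simulation with made-up true click rates: variant B is better.
true_rates = {"A": 0.03, "B": 0.05}
test = ThompsonSamplingTest(["A", "B"])
for _ in range(10_000):
    v = test.choose()
    test.record(v, random.random() < true_rates[v])
print(test.stats)  # most opens end up on B; few openers ever saw A
```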

4/ No more marketing message subjectivity!

The A/B test is supposed to provide the marketer with data to make the right decisions. But decided well in advance of the send, those decisions struggle to keep up with the individual’s temporality: their mood of the moment, their context, etc. A successful campaign is closer to one that “lives its own life”: the marketer designs it, but it is the power of the click rate, and therefore the individual’s attraction to one element or another, that leads to success. Learning then happens in real time.

Basing future email campaigns on a “winning” version, judged by criteria poorly suited to understanding individual expectations, means risking cutting yourself off a little further from the path to conversion!

All things considered, A/B testing your emails means:

…a lot of production and interpretation time (and therefore money);

…distorted performance statistics;

…static messages;

…part of the database penalized.

Based on Real Time Testing, we:

…save time with a single HTML;

…quickly detect and deliver the high-performing version, optimizing the performance of each email;

…take into account the context in which individuals open their emails;

…allow you to create message combinations;

…learn in real time about campaign trends.

What about you? What has been your experience with traditional A/B testing in emails?
