How Atlassian Uses A/B Testing to Optimize Their Campaigns

Lauren Smith
Published in Savvy Inbox · 5 min read · Oct 24, 2016

With so much competition in the inbox, today’s email marketing is all about relevance and connecting with your audience. Batch-and-blast emails aren’t going to make the cut — you need to be strategic and send emails that your subscribers want. How do you figure this out?

You can use A/B testing to send better email campaigns.

We know that making smart decisions based on data can increase results, but sometimes it’s hard to know where to start. Every year at The Email Design Conference we cover how to produce great looking — and performing — emails, sharing examples of companies that utilize data and insights from a variety of sources to understand their audience and create engaging, unique campaigns.

At the 2014 conference, Mike Heimowitz, Atlassian’s online marketing manager, presented on using A/B testing to learn what resonates with your audience so you can continuously optimize your emails. With this data in hand, you’ll be able to produce better-performing campaigns (and, hey, maybe even make more money!).

What is A/B testing?

A/B testing involves comparing the results of one version of an email (the control) against another version of an email (the test).

When executed correctly, these tests give marketers concrete evidence of which tactics work on their audience and which don’t. There are countless things to test, including headlines, preheader text, ‘from’ names, and the like. It’s one of the most effective (and easiest!) ways to make measurable improvements to your campaigns.

Setting up a test

When setting up an A/B test, the first step is deciding what aspect of the email you will be testing — is it the color of a button? A graphic? A subject line? Then, since testing is a continuous process, you’ll want to formulate a hypothesis for your test so it’s repeatable. For example, a hypothesis would be “If we use our company name as the From name, rather than a salesperson’s name, open rates will increase because our subscribers recognize the company name.”

With a hypothesis in place, if your test results are conclusive and the hypothesis holds, you can repeat that test in the future (and continue to improve your emails!). Once the hypothesis has been determined, choose which type of test you’d like to run. Mike covered three types of A/B tests in his presentation (a rough split sketch follows the list):

  • 50/50 test: Send version A to 50% of your audience and version B to the other 50%.
  • 25/25/50 test: Send version A to 25% of your audience and version B to the other 25%. After a certain amount of time — perhaps a couple of hours or days depending on your list size — send the winner of that test to the remaining 50% of the list.
  • Holdout test: Send version A to 45% of your list, version B to another 45%, and no email at all to the remaining 10%. Then compare conversions across the three groups: did subscribers who received version A, version B, or no email convert best? This can help show the effectiveness of email itself!
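As a rough illustration of how these designs allocate a list, here is a minimal Python sketch. It assumes your audience is simply a list of email addresses; the group names, fractions, and seed are illustrative choices, not anything Atlassian prescribes.

```python
import random

def split_for_test(subscribers, design="50/50", seed=42):
    """Shuffle a subscriber list and allocate it to test groups.

    Returns a dict mapping group name -> list of subscribers. The
    group names and fractions mirror the three designs above.
    """
    designs = {
        "50/50":    {"A": 0.50, "B": 0.50},
        "25/25/50": {"A": 0.25, "B": 0.25, "send_winner_later": 0.50},
        "holdout":  {"A": 0.45, "B": 0.45, "no_email": 0.10},
    }
    fractions = designs[design]

    shuffled = subscribers[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed makes the split reproducible

    groups, start = {}, 0
    for name, fraction in fractions.items():
        end = start + round(fraction * len(shuffled))
        groups[name] = shuffled[start:end]
        start = end
    groups[name].extend(shuffled[start:])  # rounding leftovers go to the last group

    return groups

# Example: a 25/25/50 test on a toy list of 1,000 addresses
subscribers = [f"user{i}@example.com" for i in range(1000)]
groups = split_for_test(subscribers, design="25/25/50")
print({name: len(members) for name, members in groups.items()})
# {'A': 250, 'B': 250, 'send_winner_later': 500}
```

Any leftover subscribers from rounding are added to the last group, so every address ends up in exactly one segment.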

The next step is setting up the test in your Email Service Provider (ESP). ESPs have different testing capabilities: some offer straightforward 50/50 split testing, others offer 25/25/50 testing, others support custom splits, and some may not have a testing platform at all. However, as Mike stated in his presentation:

“Just like a good carpenter can’t blame his tools, a good marketer can’t blame his ESP.”

Regardless of whether your ESP has a testing platform, you can still set up tests. It may be a more time-consuming, manual process, but you can still split your list yourself and send a different variation to each segment.
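If you do end up splitting by hand, one approach that works without any ESP support (a suggestion here, not something from Mike’s talk) is deterministic assignment: hash each address so a subscriber always lands in the same variant, even if you re-export the list. A minimal sketch, where the salt string is just an illustrative label for the test:

```python
import hashlib

def assign_variant(email, variants=("A", "B"), salt="subject-line-test-1"):
    """Deterministically assign a subscriber to a variant.

    Hashing the address plus a per-test salt keeps assignments stable
    across exports and re-sends, with no ESP support required. Change
    the salt for each new test to get a fresh, independent split.
    """
    digest = hashlib.sha256((salt + email.lower()).encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("subscriber@example.com"))  # same variant every time for this test
```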

You’ll also need to identify the data point(s) you are going to measure in the test: clicks, opens, conversions? Be sure to set your own goals rather than leaning on industry benchmarks. Your audience and emails are unique, so treat them that way! If you haven’t done A/B testing before, you can use your current open, click, and conversion rates as the baseline for your test results.
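One simple way to establish that baseline is to average your historical rates across recent sends. A minimal sketch, assuming you can export per-campaign delivered, open, and click counts from your ESP (the numbers below are made up):

```python
# Hypothetical exports from past campaigns: (delivered, unique opens, unique clicks)
past_campaigns = [
    (12000, 2640, 310),
    (11500, 2530, 298),
    (12800, 2944, 345),
]

delivered = sum(c[0] for c in past_campaigns)
opens = sum(c[1] for c in past_campaigns)
clicks = sum(c[2] for c in past_campaigns)

print(f"baseline open rate:  {opens / delivered:.1%}")   # the rate to beat in future tests
print(f"baseline click rate: {clicks / delivered:.1%}")
```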

Measuring the success of your test

It’s important to choose a statistical significance threshold for your test. For example, a statistical significance of 98% means there is only a 2% chance that the difference you observed is due to random variation rather than a real effect.

If the statistical significance isn’t high, then you wouldn’t want to make future decisions based on those test results. For example, if you got a 75% statistical significance for using blue buttons vs. green buttons in your emails, you’d likely want to retest this to see if you can get more conclusive results.

Atlassian uses a statistical significance threshold of 95% for their tests and relies on a handy free A/B significance calculator to figure it out. In one of his examples, Mike set up a subject line test: would putting the feature or the product first in a subject line result in a higher open rate?

After running the test, he put the results in the A/B significance test tool.
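If you want to sanity-check a calculator’s output yourself, the standard two-proportion z-test does the same math. A minimal Python sketch with made-up counts (these are not Atlassian’s figures):

```python
from math import erf, sqrt

def ab_significance(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test on open rates.

    Returns (rate_a, rate_b, confidence), where confidence is
    1 minus the two-sided p-value.
    """
    rate_a, rate_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_b - rate_a) / se

    def normal_cdf(x):  # standard normal CDF via the error function
        return 0.5 * (1 + erf(x / sqrt(2)))

    p_value = 2 * (1 - normal_cdf(abs(z)))
    return rate_a, rate_b, 1 - p_value

# Illustrative counts only -- not Atlassian's actual numbers
rate_a, rate_b, confidence = ab_significance(opens_a=2100, sent_a=10000,
                                             opens_b=2205, sent_b=10000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  confidence: {confidence:.1%}")
```

If the confidence comes in below your chosen threshold (say, 95%), the honest move is the one described above: retest rather than act on the result.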

From the subject line test, version B saw a 5% increase in opens with a statistical significance of 97%. As a result, Atlassian now leads product emails with the product name, followed by the feature. They’ve since built on this result with further tests, such as whether a hyphen or a colon in the subject line performs better.

A continuous process

There is no end game when it comes to testing — you should always be testing! Testing allows you to continuously improve your emails and give your customers content that matters. And the more you know about your audience, the more advanced techniques you can put to work in your emails, like HTML5 video backgrounds, CSS3 animations, and web typography.

Want more great tips for your next email marketing campaign? Subscribe to Litmus Weekly, a weekly digest of the latest and greatest from #emailgeeks around the world.

This post originally appeared on the Litmus Blog.



Lauren Smith

Marketer at @litmusapp. Lover of cooking, traveling, wine, and the Oxford comma.