Three Steps for A/B Testing in Marketo

Caroline Graham
SmartBug Media
May 20, 2020

To start A/B testing in Marketo, it helps to understand the math, understand the tool, and understand what your tests can teach you.

(If you need an intro to A/B testing, pause here and read our blog on the 5 most frequently asked questions about A/B testing. Then, dive in!)

1. Understand the Math

In an A/B test, marketers test two versions of creative or copy to see which version drives the most conversions. When testing variables, we are asking two questions: First, “Did this change make any difference?” And second, “How can we be sure?”

Consider these two scenarios.

Scenario 1: Version A has a 100 percent conversion rate, and Version B has a 0 percent conversion rate. You feel like you hit the optimization jackpot!

Scenario 2: Version A has a 4 percent conversion rate and Version B has a 3 percent conversion rate. You feel less confident, but decide that higher is better.

If you are new to testing, you may assume that Version A is the winner in both, but without knowing the audience size, you would be foolish to bet on either.

In A/B testing, audience size is key. Say Scenario 1 had an audience of two and Scenario 2 had an audience of 2,000. In that case, the increase from 3 to 4 percent is much more trustworthy than the 100 percent conversion rate that had you smiling.

The smaller the sample size, the more likely that outside factors are actually responsible for outcomes. If your sample size of two includes just your mom and your ex, for example, there’s a good chance your audience demographics impacted conversion rate more than the fonts you were testing.
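To put numbers on that intuition, here is a minimal Python sketch of a standard two-proportion z-test. The even 1,000-per-version split and the conversion counts are assumptions drawn from the scenarios above.

```python
from math import erf, sqrt

def confidence_a_beats_b(conv_a, n_a, conv_b, n_b):
    """One-sided confidence that Version A's true conversion rate
    beats Version B's, via a two-proportion z-test."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (conv_a / n_a - conv_b / n_b) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

# Scenario 2: 2,000 recipients split evenly, 4 percent vs. 3 percent
print(confidence_a_beats_b(40, 1000, 30, 1000))  # ~0.89

# Scenario 1 (one person per version) is far too small for this
# normal approximation to apply at all, which is the whole point.
```

About 89 percent confidence: real evidence, but still short of the usual 95 percent bar, which is exactly why audience size matters.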

Many researchers recommend a minimum audience size of 10 per variable. Beyond that, your audience size will depend on how sensitive you want to be to variations and how confident you want to be in the results. An online sample-size calculator can help you play with these numbers.

The audience size you need for an A/B test can vary from hundreds to hundreds of thousands depending on the amount of variation you want to measure. If you want to include more than two versions (a multivariate test instead of a strict A/B test), expect the number to go up.
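And if you would rather run those numbers yourself, the standard two-proportion sample-size formula is easy to sketch. The 95 percent confidence, 80 percent power, and example conversion rates below are illustrative assumptions, not Marketo defaults.

```python
from math import ceil

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Recipients needed per version to detect a lift from p1 to p2
    at 95 percent confidence (z_alpha) with 80 percent power (z_beta)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(sample_size_per_variant(0.03, 0.04))  # ~5,300 per version
print(sample_size_per_variant(0.03, 0.06))  # ~750: bigger lifts need fewer people
```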

Once you wrap your head around the math behind A/B testing, it’s time to set up your test in Marketo.

2. Understand A/B Tests in Marketo

Marketo is an excellent A/B testing tool for beginners and experts alike. You can test just about anything: email sends, landing pages, and even emails within ongoing drip nurture programs.

As you orient yourself, be aware of Marketo’s naming quirks. The testing option is called “A/B tests” for standard email sends, “Champion/Challenger” for nurture programs and triggered emails, and “test groups” for landing pages. All three tools give you the option to run either a simple A/B test or a more complex multivariate test.

A/B Tests for Email Sends

The Marketo Email Program is ideal for batch emails. It includes a built-in A/B testing feature that runs your test with a portion of your audience and then automatically sends the winner to the rest of your audience.

Here’s how it works: You decide what percentage of your total audience to include in your test group. Each person in that group receives one of your two email versions. Marketo waits to see which version performs better, then sends the winning version to the rest of your audience.
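The arithmetic behind that split is worth seeing once. Here is a quick sketch, assuming the test group divides evenly between the two versions; the 10,000-person audience and 30 percent test size are hypothetical.

```python
def email_test_split(audience, test_pct):
    """How many people see each version, and how many get the winner."""
    test_group = round(audience * test_pct)
    per_version = test_group // 2         # half of the test group gets A, half gets B
    winner_send = audience - test_group   # everyone else gets the winning version
    return per_version, winner_send

# Hypothetical: a 10,000-person list with 30 percent held out for the test
per_version, winner_send = email_test_split(10_000, 0.30)
print(per_version, winner_send)  # 1,500 see each version; 7,000 get the winner
```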

Remember that audience size is key. If you’re short on people, you can always put 100 percent of your audience in your test group to be more confident in the results. Even though half of your audience will not get the winner, you can still use what you learned to inform future sends.

You can A/B test by cloning your email, editing whatever you want (copy, subject, sender, and so on), and creating two different versions. Or you can do a quick-and-dirty test based on sender, subject line, or date and time without having to make a copy.

Champion/Challengers for Nurture Programs and Triggered Emails

Unlike many competitors, Marketo allows you to test not only batch emails but ongoing triggered emails as well. The Champion/Challenger feature makes it easy to constantly improve the content in engagement program streams or in emails that are triggered when people fill out forms.

Because the timing for these programs is already set, you can’t A/B test date and time. But the other A/B test types are the same as above: subject line, sender, or two different email versions. The existing version in your program is your “champion”; the new version is the “challenger.”

Champion/challenger emails are part of ongoing programs, so it will take longer to reach a significant audience sample size. You can schedule regular alerts to keep an eye on how your emails are performing. Then, you can choose when to declare a winner. Marketo will swap the winner into your program on a date you select, saving your work while keeping you in control.
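Because recipients trickle in over time, it helps to estimate up front how long the test will take. A back-of-the-envelope sketch, where the daily trigger volume and the target sample are assumptions:

```python
from math import ceil

def days_until_sample(target_per_version, daily_sends):
    """Rough wait before a champion/challenger test hits its target,
    assuming triggered sends split evenly between the two versions."""
    return ceil(target_per_version * 2 / daily_sends)

# Hypothetical: a form that triggers about 80 emails a day,
# aiming for 1,000 recipients per version
print(days_until_sample(1000, 80))  # 25 days
```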

Test Groups for Landing Pages

Marketo also allows you to A/B test landing pages. To test landing page variations, you will clone your page, make your changes, and create a test group to house your variations.

Marketo will track page views and form submissions from each page at the test group level so you can compare. As with a champion/challenger test, you decide when to end the test. This means you should use a statistical significance calculator to make decisions about your results.
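If you want a do-it-yourself version of that calculator, the same two-proportion math from Step 1 applies, with page views as the sample and form submissions as conversions. The traffic and conversion numbers below are hypothetical.

```python
from math import erf, sqrt

def two_sided_p_value(fills_a, views_a, fills_b, views_b):
    """Two-sided p-value for the difference between two form-fill rates,
    using a two-proportion z-test (normal approximation)."""
    pooled = (fills_a + fills_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = abs(fills_a / views_a - fills_b / views_b) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Hypothetical: Page A converts 58 of 1,200 views; Page B, 90 of 1,250
print(two_sided_p_value(58, 1200, 90, 1250))  # ~0.014, below 0.05: significant
```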

Good to know: In reports showing performance across all of your landing pages, both versions in your A/B test group will roll up together for a clearer view of your data.

Once your test is done, it’s time to assess the results.

3. Understand What Your A/B Tests Can Teach You

Congratulations, your test was a runaway success … or was it?

We’ve all been there.

You set up the perfect A/B test, and instead of getting a clear winner, the results are inconclusive. Do you log those results and learn from them? Or do you simply neglect to tell your team that you ran the test at all?

Recently, scientists did a study about, well, studies. They found that scientific journals were more likely to publish papers about experiments that proved their hypotheses than those that did not. This bias means scientists lose out on knowledge that is not exciting but may still be valuable.

Marketers are guilty of the same thing. If we see statistically significant results, we pat ourselves on the back for our strong instincts. On the flip side, we see our inconclusive tests as failures.

We shouldn’t. Early in my career, a veteran marketer told me to see every test as a success. Why?

Maybe your results seemed promising (you are 85 percent confident that one version did better), but they don’t quite rise to the level of statistical significance (usually 95 percent confidence). In that case, this mentor pointed out, you can still act on the data. You may be wrong, but you at least have a better idea than you did going in.

If your two versions perform about the same (you are about as confident as you would be in a coin toss), you still learned something. Maybe you learned that both of your creative options are still on the table for future campaigns! Or maybe you learned to go big or go home with your tests and not sweat the small stuff.

Whatever you learn, you will not regret getting in the habit of regularly testing your campaigns. Doing so will surprise you, keep you humble, and give you a competitive edge as a savvy, rigorous marketer.

Originally published at www.smartbugmedia.com.
