5 Quick and Dirty Tricks for A/B Testing on Facebook Without Wasting a Penny


You know how important A/B testing is — you’ve read all the case studies about millions of dollars made and maybe had shocking results of your own. You even sleep with a copy of Lean Startup under your pillow. But testing takes a lot of time and money — are you sure that you’re setting your tests up in a way that maximizes learning given your limited resources?

I came up with the tricks below as a quick and dirty way to solve exactly this problem for our clients at Ladder. I thought I’d share them, as there are plenty of people out there who can’t afford to hire someone to do this… and those people need free money-saving tips more than anyone! I’ll walk through Facebook ads specifically, but you can apply the principles to any platform.

Test Design

Before you start testing, have a clear idea of WHY you’re doing it. Good questions are “How should I price my product?”, “Which audiences are most interested?” or “Is loss-aversion or gain-seeking a better hook for my ad copy?”. Try to focus here: you’re wasting your time and money testing 50 shades of blue unless you’ve got millions of users and dollars (as you’ll see below).

How do you figure out what you can test with your budget and still get a result you can trust? You can read up on statistical significance, but I find a good rule of thumb is to aim for 30 observations per variation. Ideally an ‘observation’ is a purchase or signup, as close to revenue as possible. Now for a quick calculation using your best guesses:

  • (3 Image) x (3 Copy) x (1 Audience) = 9 test variations
  • ($10 CPM / 1,000 impressions) / (1% CTR) / (5% CVR) = $20 cost per conversion
  • ($20 CPA) x (30 observations) x (9 variations) = $5,400 total budget
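
If you’d rather not do this on a napkin, here’s a minimal Python sketch of the same back-of-envelope maths. Every input is a guess taken from the example above, not a real benchmark — swap in your own numbers:

```python
# Back-of-envelope A/B test budget, using the guesses from the example above.
variations = 3 * 3 * 1    # images x copy x audiences = 9 variations
cpm = 10.0                # cost per 1,000 impressions ($)
ctr = 0.01                # click-through rate (1%)
cvr = 0.05                # conversion rate per click (5%)
observations = 30         # rule-of-thumb sample size per variation

cost_per_click = cpm / 1000 / ctr             # $1.00
cost_per_conversion = cost_per_click / cvr    # $20.00
total_budget = cost_per_conversion * observations * variations
print(f"${cost_per_conversion:.0f} CPA -> ${total_budget:,.0f} total")  # $20 CPA -> $5,400 total
```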

Observation Trick

Expensive, right? Don’t have that kind of money? Try running the same test, but concluding based on which variation gets the most clicks (instead of conversions). This would cost 1/20th the amount, only $270 total, because getting to 30 clicks costs $30 per variation vs $600 for 30 conversions. Of course you should trust the result less, because some variations are prone to driving clicks that will never convert… so use your best judgement.
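
Here’s the same calculation with clicks as the observation (again, all inputs are the made-up numbers from the example):

```python
# Click-based version: stop at 30 clicks per variation instead of 30 conversions.
variations, observations = 9, 30
cpm, ctr = 10.0, 0.01
cost_per_click = cpm / 1000 / ctr               # $1.00
total = cost_per_click * observations * variations
print(f"${total:,.0f} total")                   # $270 total, 1/20th of $5,400
```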

Bandit Trick

Another way to save budget is to bandit test: dropping variations as you go, based on early performance. You’ll be more likely to drop a variation that could have performed well (a false negative), but it costs less and will get you to a good-enough result quicker. Facebook already does this to automatically find the best ad copy in an ad set. So if you need performance more than rigour, dump everything in one ad set and let the Facebook gods decide.
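
Facebook doesn’t publish exactly how its rotation works, so the sketch below is just a toy illustration of the drop-as-you-go idea: successive elimination on simulated click-through rates (all the numbers are invented):

```python
import random

# Toy bandit: serve each surviving ad in batches, then drop the worst
# observed CTR until one ad remains. The 'true' CTRs are made up.
true_ctrs = {"ad_a": 0.008, "ad_b": 0.012, "ad_c": 0.010}
clicks = {ad: 0 for ad in true_ctrs}
impressions = {ad: 0 for ad in true_ctrs}
alive = set(true_ctrs)

while len(alive) > 1:
    for ad in alive:                          # one batch per surviving ad
        for _ in range(1000):
            impressions[ad] += 1
            clicks[ad] += random.random() < true_ctrs[ad]
    worst = min(alive, key=lambda ad: clicks[ad] / impressions[ad])
    alive.remove(worst)                       # the false-negative risk lives here

print("Winner:", alive.pop())                 # usually ad_b, but not always
```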

Precision Trick

What if you’re testing on Facebook but the results have wider implications? For example, testing tag-lines to use in a TV ad, or checking whether a startup idea is worth pursuing. If a little extra spend now can potentially save you millions down the line, it pays to be rigorous. In this case, your best bet is to set up one ad set per variation. This gives you the best shot at serving each variation to an equal number of people and arriving at a valid result.
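
If you go the rigorous route, it’s also worth checking significance properly rather than eyeballing it. A two-proportion z-test is one standard option; the sketch below uses statsmodels with invented conversion counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results from two equal-budget ad sets (numbers invented).
conversions = [40, 65]        # conversions per variation
reached = [1000, 1000]        # people served each variation

z_stat, p_value = proportions_ztest(conversions, reached)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 suggests a real difference
```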

Deprivation Trick

Another neat trick? Deprivation testing. This works well for anything that’s difficult or impossible to A/B test side-by-side, like app store copy or the price of your app (it’s also one of the few ways to conclusively test SEO tactics). Here’s how it works:

  • Variation 1 = weeks 1 & 4
  • Variation 2 = weeks 2 & 5
  • Variation 3 = weeks 3 & 6

Every week for 6 weeks, cycle through the 3 variants, starting fresh each week by re-creating the campaigns to erase history and keep the test fair. This way you can parse out the relative performance without worrying too much about the effect of a single ‘bad’ week.
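
The rotation itself is trivial to generate. Here’s a small Python sketch, assuming the 3-variant, 6-week setup above:

```python
from itertools import cycle

# Rotate 3 variants over 6 weeks so each gets two non-adjacent weeks.
variants = ["Variation 1", "Variation 2", "Variation 3"]
for week, variant in zip(range(1, 7), cycle(variants)):
    # Re-create the campaign fresh at the start of each week.
    print(f"Week {week}: run {variant}")
```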

Geography Trick

This last one is a more advanced (but time-consuming) version of deprivation testing. To make it work, get a list of geographic areas (zip codes / states / DMAs) and randomly assign them to each variation. Each variation needs the same number of areas and a similar total population size:

  • Variation 1 = location 2 & 6 (53.5k pop.)
  • Variation 2 = location 5 & 1 (51.2k pop.)
  • Variation 3 = location 8 & 9 (49.9k pop.)
  • Variation 4 = location 10 & 4 (52.1k pop.)
  • Variation 5 = location 7 & 3 (51.7k pop.)

You should also choose a specific ‘control’ population that won’t get any treatment, to compare the results against. This is a much more trustworthy method than simple deprivation testing, which is why it’s commonly used to measure the ROI of TV advertising.
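
Here’s a minimal sketch of the assignment step, with invented zip-code populations (in practice you’d pull these from census data). It shuffles the areas, then greedily fills whichever group is lightest so each variation, plus the control, ends up with a similar total population:

```python
import random

# Invented populations; replace with real census numbers for your areas.
areas = {"loc_1": 26_000, "loc_2": 27_500, "loc_3": 25_700,
         "loc_4": 26_100, "loc_5": 24_700, "loc_6": 26_000,
         "loc_7": 25_300, "loc_8": 24_900, "loc_9": 25_000,
         "loc_10": 26_200, "loc_11": 25_500, "loc_12": 25_800}
groups = {g: [] for g in ["var_1", "var_2", "var_3", "var_4", "var_5", "control"]}

shuffled = list(areas)
random.shuffle(shuffled)                  # random assignment is the whole point
for area in shuffled:
    # Give the next area to whichever group has the smallest population so far;
    # with similar-sized areas this also balances the number of areas per group.
    lightest = min(groups, key=lambda g: sum(areas[a] for a in groups[g]))
    groups[lightest].append(area)

for name, locs in groups.items():
    print(name, locs, f"{sum(areas[a] for a in locs):,} pop.")
```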


THANKS FOR READING :-)

If you need someone to do this for you >> visit ladder.io
If you have questions then tweet at me >> @michaeltaylor