3 Easy Ways To Completely Ruin Great Data

Geoffrey Yu
Marketing And Growth Hacking
5 min read · Sep 25, 2018

“What do you mean it’s doing worse?”

I had spent the last few months building an advertising program for a B2B business…

And it had been kicking ass.

The landing page I wrote was beating the pants off the control page they’d had running for over three years.

3X more people were converting into leads on the new page versus the old one…

Or they were, until a sudden drop.

“The numbers just tanked,” my counterpart with the company said, baffled.

No change. No rhyme or reason. The page that had been outperforming the old one for MONTHS just… stopped outperforming it. Instead, it was doing WORSE than the control.

And nothing we did could fix it.

Because in truth?

It Had Nothing To Do With Us.

Nor with any single action anyone took.

It had to do with SEASONALITY — something that absolutely ruins not just my landing pages, but many, many split tests.

I’ve told you this story because I wanted to drive home the point that sometimes you can do everything right… and just not take into account a variable that WILL screw you.

Learning those variables means studying and experimentation.

I think it’s utterly fantastic that more people these days are running split tests. That they’re more invested in data-driven marketing.

But they often do it wrong, and end up with data that doesn’t make sense.

There are three incredibly common mistakes I see ALL THE TIME.

The first:

#1: Not Considering That Time Itself Is A Variable.

Often, when setting up a side-by-side split test is difficult, I will see marketers try to wing it.

Yeah, we made this change two weeks ago. And as you see, our conversion rate for this period is greater than the period two weeks ago.

Therefore, this change made our conversion rate go up…

Right?

Bzzt.

These tests don’t take into account that demand and how people act are DIFFERENT on different days. Weeks. Months.

Exactly as illustrated in the story above.

In this case, the audience “switched out” for another audience near the end of the year. One group of people stopped clicking the ads… and another started.

The new audience was low investment and low research. The longer page I wrote that performed so well just didn’t appeal to them.

So we gave them a shorter one.

When I asked the company to re-implement the longer page in Q2 of the next year…

Surprise, surprise…

It Outperformed The Shorter Page… And By The Same Margin!

Like retail, almost all products and services have their own peaks and dips. You just don’t know what they are until you run into them.

Even certain DAYS and certain WEEKS will cause variances.

Any test where you’re comparing one period of time to another is inherently flawed and will likely contaminate your data.

This is why any split test you run should be done for at LEAST a week. Track a new split test day-by-day and you’re going to see the winner swap around so quickly your head will spin.
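One way to sidestep the time problem entirely is to run both versions at the same time, bucketing each visitor deterministically instead of comparing before-and-after periods. Here’s a minimal sketch in Python (the hashing scheme and visitor IDs are illustrative, not any specific testing tool’s API):

```python
import hashlib

def assign_variant(visitor_id: str, variants=("control", "new_page")) -> str:
    """Deterministically bucket a visitor so both variants run
    concurrently, over the same time period and the same audience."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same visitor always lands in the same bucket,
# and traffic splits roughly 50/50 across many visitors.
print(assign_variant("visitor-42"))
```

Because assignment depends only on the visitor ID, seasonality hits both variants equally — the audience shift that tanked my landing page would have dragged down control and challenger alike, instead of masquerading as a losing test.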

Another common mistake…

#2: Testing More Than One Thing At Once.

Now, there are two ways to test something.

One, testing EVERYTHING. Meaning you compare one page and a second one that’s completely different in every way (but that you think would work).

Two, just testing one variable — such as the headline, the copy, or the design.

When you test two or more without being aware of it… problems happen.

Let’s say we test a new page with a new headline and a new call to action…

This new page performs better than the control.

Which was responsible for the improvement? The headline or the call to action?

It’s impossible to tell for certain. The only thing you can do is ASSUME — which is absolutely antithetical to getting great data.

While you might think the headline would be the bigger factor… it could be that the headline decreased response, while the call to action increased it enough to more than make up the difference.
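To make that confound concrete, here are some purely illustrative numbers (made up for the example, not real test data) showing a bundled test “winning” even though one of its two variables lost:

```python
# Hypothetical conversion rates -- illustrative only, not real data.
control      = 0.020  # old headline, old CTA
new_headline = 0.016  # headline change alone actually hurts
new_cta      = 0.030  # CTA change alone helps
both_changes = 0.024  # combined page still beats control

# The bundled test looks like a win overall...
assert both_changes > control
# ...even though the headline reduced response,
assert new_headline < control
# ...and the CTA carried the entire improvement.
assert new_cta > control
```

Test the headline and the CTA separately and you’d keep the CTA, revert the headline, and land at 3.0% instead of settling for 2.4%.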

Most commonly, people won’t test these variables separately to confirm or deny their assumptions — they’ll just use those assumptions going forward.

Not great.

And then…

#3: Testing To Different Audiences.

Say you have a bunch of people who joined your email list because of your paid ads. You have another group that found you organically.

You want to test two different subject lines and find out which is most effective…

So you send one subject line to one list, and the other to the second list.

In no scenario will this give you accurate results — because you, again, have no way of distinguishing what affected the final test results.

Was it the message? Or was it the audience?

I see this error more in email marketing than anywhere else… but it’s common in general.

People will also extrapolate results from one audience and assume it will perform similarly for a wider, probably completely different, group.

That’s not how it works.

Who you are talking to is one of the biggest variables you can affect. Take that into account.

The Ideal Way To Test…

Test one thing at a time. If you’re split testing, make sure both pages/emails/messages are getting an equal amount of traffic and going out at the same time. And make sure they’re going out to the same people.

This will give you the MOST accurate results — and better data.
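Once a test is set up that way, the last question is whether the difference you see is real or just noise. A stdlib-only sketch of a two-proportion z-test (the visitor and conversion counts below are invented for illustration; in practice a stats library like scipy or statsmodels is the safer choice):

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.
    conv_* are conversion counts, n_* are visitor counts.
    Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical: 50k visitors per variant, run concurrently
# to the same audience with an even split.
z, p = two_proportion_z_test(conv_a=1100, n_a=50_000,
                             conv_b=1000, n_b=50_000)
print(f"z={z:.2f}, p={p:.3f}")  # a small p means the gap is unlikely to be noise
```

A small p-value (conventionally under 0.05) suggests the gap between variants isn’t just random day-to-day wobble — which is exactly the wobble that makes day-by-day winner-watching so misleading.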

Data is the foundation on which we build all the assumptions that lead to optimization and growth.

Without it you might as well be blind. Bad data can be even worse than no data — misleading us into thinking we know more than we actually do.

Skip the shortcuts — your KPIs will notice.

Read this far? A favor, if you would…

Whether you agree with what you’ve just read, or just want to explain to me how utterly WRONG I am…

Comments, claps, and shares make my day.

This is the totally shameless tip jar on the counter… and my end of article call-to-action (because I try to practice what I preach).



Telling stories about growth! CRO consultant and chief gastronumericist at NumberGlutton.