The relationship between marketing and a joke

Mark van Kasteren
Published in Kaartje2go
Aug 22, 2018 · 4 min read

Has anyone ever told you the joke about the internet in prehistoric times? Its moral is worth keeping in the back of your head when you work in a data-driven workplace, because it points to a pitfall that is easy to fall into without noticing.

The story

Did you know that during excavations in London, scientists found remnants of a huge network of ropes with cans at the end? The remnants were found at a depth of 100 meters and were estimated to be about 1,000 years old. The English concluded from this that their ancestors had a primitive telephone network in the Middle Ages. Not long after that, the Italians found something similar at excavations near the Colosseum. At 200 meters they found copper cables running from house to house. Italian historians concluded that the ancient Romans already possessed an advanced digital network.

In line with this trend, Dutch researchers also published a report: “After excavations in the soil of Amsterdam, down to a depth of 500 meters, nothing resembling a communication network was found at all.” The archaeologists stopped digging and concluded that the Dutch in prehistoric times already owned a wireless network.

Metaphor

The joke is a metaphor for something that happens often in online marketing. The good thing is that there is a lot of data and testing going around. But when an A/B test or a new campaign does not show the expected results, there is suddenly a lot of guesswork about why it failed. Or (maybe worse) about how it can still be considered a success even though the main goal was not achieved. The big problem is that these “theories” are never tested again; we simply move on to the next project.

What would you do?

Let’s say an A/B test was run on a product page of your website, where selling points were added in the B variation. The results showed that the new B variation did not achieve its goal: a significant increase in sales. However, one of the following things did happen:

(1) Your test did show an increase in average order value.

(2) Your test did show a big decrease in bounce rate on the product page.

What would you do? Chances are someone will immediately have a “theory” about why the test can still be considered a success.

Situation 1 could have been the effect of the selling points convincing people to spend more. However, it could also be that, since you highlighted “free returns in 30 days” as a selling point, customers simply bought more sizes with the intention of keeping only one item. Situation 2 could have been caused by the selling points convincing visitors to look around more in your webshop, but it could also mean users found the description of the selling points vague and hoped to find answers elsewhere on your website. As you can see, you can come up with plenty of theories here.

Human nature

A colleague pointed out that the story strongly resembles Simpson’s Paradox. In short, it is a phenomenon that comes with aggregating statistics. Looking at the overall conversion rate, you could conclude that variation A is better than variation B, yet that conclusion can be completely different when you look at a specific segment. Across all devices combined, variation A might generate more sales than B, but that does not necessarily mean variation A also wins on mobile devices; variation B could still have a higher conversion rate there. Something to watch out for. And in practice, not digging deep enough into the data before drawing conclusions goes hand in hand with bad human behavior.
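To make this concrete, here is a minimal sketch with purely hypothetical numbers (not from a real test): variation B has the higher conversion rate on both desktop and mobile, yet variation A wins on the overall rate, simply because the two variations received a very different mix of traffic.

```python
# Hypothetical conversion numbers illustrating Simpson's Paradox.
# B wins in every device segment, yet A wins overall, because the
# variations saw very different traffic mixes per device.
segments = {
    "desktop": {"A": (80, 1000), "B": (30, 300)},   # (conversions, visitors)
    "mobile":  {"A": (20, 400),  "B": (70, 1200)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for device, variants in segments.items():
    for name, (conv, vis) in variants.items():
        totals[name][0] += conv
        totals[name][1] += vis
        print(f"{device:8s} {name}: {conv / vis:.1%}")

for name, (conv, vis) in totals.items():
    print(f"overall  {name}: {conv / vis:.1%}")
```

With these numbers, B converts at 10.0% vs. A’s 8.0% on desktop and 5.8% vs. 5.0% on mobile, but overall A comes out at 7.1% against B’s 6.7%.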

Reaching for positive theories also happens because we (unconsciously) want a winner after every test. Who wants to disappoint their coworkers with another marketing failure? Or we want a variation to win so badly, because of all the effort we put into it, that we claim a victory from the data as fast as we can.

Coincidentally, these are also reasons 3 and 6 why people lie the most…

Playbook

The best approach is not to decide on the rules (e.g. the goals) after an online marketing test has finished, but at the start. Brainstorming about results is very helpful for deciding on next steps, but a fancy theory should not determine whether a test was successful. Agree beforehand that if you get unexpected results, you will repeat the test. And know that with testing there is always the risk of significant “false positive” results.
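To put a rough number on that last risk, here is a minimal sketch (standard library only, hypothetical parameters) of an A/A simulation: both “variations” share the exact same true conversion rate, so every significant result is by definition a false positive, and at a 5% significance level roughly one in twenty tests will still crown a “winner.”

```python
import random
from statistics import NormalDist

# A/A simulation: both variations have the same true conversion rate,
# so any "significant" result is a false positive.
random.seed(42)
TRUE_RATE, VISITORS, ALPHA, RUNS = 0.05, 5000, 0.05, 1000

def significant(conv_a, conv_b, n):
    """Two-proportion z-test; returns True if p < ALPHA."""
    p_a, p_b = conv_a / n, conv_b / n
    pooled = (conv_a + conv_b) / (2 * n)
    se = (2 * pooled * (1 - pooled) / n) ** 0.5
    if se == 0:
        return False
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < ALPHA

false_positives = sum(
    significant(
        sum(random.random() < TRUE_RATE for _ in range(VISITORS)),
        sum(random.random() < TRUE_RATE for _ in range(VISITORS)),
        VISITORS,
    )
    for _ in range(RUNS)
)
print(f"'Winning' A/A tests: {false_positives / RUNS:.1%}")  # roughly 5%
```

Run enough tests and some of them will look like winners purely by chance, which is exactly why the rules need to be fixed before the test starts.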

Don’t make your life more difficult if a rerun of the test produces the same surprising outcome. Embrace the fact that nobody has a really good explanation for why this result keeps coming up. At the same time, always keep an eye on the long term. Don’t forget that data-driven testing is just a tool, and trust your instincts alongside the test results. A winning variation does not necessarily have to be implemented if your market and customer knowledge tells you it is not a good idea in the long term. That also goes vice versa: if your gut tells you something is a good idea, don’t stop at the first negative test, but think of new ways to test it. It would be quite remarkable if you hit the bullseye right away with every marketing test you do. Test, learn and test again!
