3 common mistakes when doing content experiments
Simple heuristics that focus on getting the most out of your (first) experiments
It’s really great to see so many people reach out to me on the topic of online experimentation, and so many companies and professionals that are enthusiastic about it. In the last weeks I had half a dozen meetings and calls with people who wanted to foster a culture of experimentation.
Each time, I found myself teaching these people a few fundamental principles of experiment design. I’ve distilled these principles from my workshops on persuasive content experiments for websites, which I ran during my time as cofounder of Science Rockstars. We also implemented our PersuasionAPI with many of the largest companies in Holland and Europe.
Here are the mistakes I have seen made most often when starting with experimentation. Hope you enjoy! And let’s push forward, both on online experimentation and on expanding the culture to business experimentation.
#1: Test content that is not based on sound scientific evidence

We NEED to discuss shopping carts… We actually don’t need to, but we could use them as an example. Why not, right? What is the scientific evidence behind using a shopping cart in ecommerce?
When I recently discussed experimentation with someone from an energy retailer, she told me they had tried using a shopping cart as well, because in other settings it had proven successful. But sadly it didn’t work in her case.
There was an awkward silence on the phone. After a while I asked, “What did you try to test?”
“Whether people would buy more if we offered a shopping cart,” she replied.
To me that is not enough foundation for a hypothesis, so I continued by discussing how I would use a shopping cart. The most obvious hypothesis I could come up with: you can make it easier to commit to something (see Cialdini) by adding a shopping cart to the process.
In this case you would put the energy “product” in the shopping cart; you can continue to shop and can always take the product out of the cart again. Committing to this small step makes it easier to finally commit to buying.
She thought this was very interesting. Which is a good thing. Her team hadn’t connected its hypothesis to a scientific finding or concept. “Shopping carts make you buy more” is not something I have found in the online marketing literature.
Actually, there is some research showing that using a shopping cart in physical retail makes people buy more, and bigger shopping carts probably do too. But confusing the metaphor of a shopping cart with a real one would be a mistake.
The Paco Underhill studies in retail focus on the benefit of a shopping cart (or basket) over keeping products in your hands. Online you don’t have any problem carrying products around while you continue shopping, so the shopping cart serves a different purpose in each context.
Adding a shopping cart in the online context doesn’t clearly address the reason why someone would buy more, and it doesn’t link to scientific studies. By connecting your experiment to a well-researched concept, your hypothesis becomes more relevant. And it opens up even more hypotheses that you could test later on.
Another hypothesis could be that by putting something in your shopping cart you start to feel a (beginning) sense of ownership. The scientific concept of the Endowment Effect fits a hypothesis like that.
Both hypotheses have a starting point in behavioral science, which is great. But as we shall see later, both concepts are also rather complex, and (common mistake #2) we could do better by starting with a more straightforward hypothesis.

#2: Test concepts that are too complex (or unclear)
In the early days, when I was training people at Booking.com, I used different sets of “innovation cards” (Mental Notes, the Design with Intent Toolkit, and Brains, Behavior & Design) to inspire new content ideas.
Each of these cards summarizes a psychological principle and how you could apply it to content. In the very first trainings I used all three sets to inspire new ideas, so I had an enormous pool of concepts and principles that people could choose from. I thought that was a good thing.
Although it wasn’t difficult to apply the different concepts and principles, translating them into a clear experiment was less easy. Take the Endowment Effect and commitment examples mentioned earlier: these are some of the more complex concepts you can use to create content experiments.
Actually, a lot of scientific work is complex to translate into content.
Let’s return to the shopping cart example, and just for fun let’s use the lean startup hypothesis template. For the Endowment Effect:
We believe [a person who visits our website] has a problem [shopping]. We can help them with [giving them a sense of ownership before they really own the specific product: adding a product to a shopping cart increases the sense of ownership]. We'll know we're right if [those website visitors are less likely to abandon the things in their shopping cart].
And for commitment:
We believe [a person who visits our website] has a problem [shopping]. We can help them with [making shopping easier by cutting it into smaller steps: adding a product to a shopping cart asks for a lower level of commitment]. We'll know we're right if [those website visitors are more likely to buy in a next step].
We can see that the shopping cart feature could be used in both hypotheses, and the hypotheses are very similar. Teasing out causality there is rather difficult. It’s a cool challenge, but maybe not something you want to start with.
Luckily for us, there are also pretty straightforward concepts. Take Social Proof: if other people do it, you are also more likely to do it. Translating that into content is rather easy, and I haven’t seen many mistakes made there, just because it’s way less ambiguous.
Check this out:
We believe [a person who visits our website] has a problem [shopping]. We can help them with [making shopping easier by showing the most popular product (for his/her profile)]. We'll know we're right if [those website visitors are more likely to buy in a next step].
If the most popular product also happens to be a staff pick, or on discount until the end of the week, it gets trickier. But in general I advise companies to start with the concepts they are very sure of, and that are as unambiguous as possible.
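The fill-in-the-blanks structure of the lean startup template is easy to capture in a few lines of code. Here is a minimal sketch (my own helper; the function and slot names are assumptions for illustration, not part of any official template):

```python
# A tiny helper that fills the lean startup hypothesis template.
# The four slots mirror the bracketed parts of the template above.
def hypothesis(audience: str, problem: str, solution: str, signal: str) -> str:
    """Render one fill-in-the-blanks hypothesis as a single string."""
    return (
        f"We believe [{audience}] has a problem [{problem}]. "
        f"We can help them with [{solution}]. "
        f"We'll know we're right if [{signal}]."
    )

# The social proof example from above, expressed through the helper.
social_proof = hypothesis(
    "a person who visits our website",
    "shopping",
    "making shopping easier by showing the most popular product",
    "those website visitors are more likely to buy in a next step",
)
print(social_proof)
```

Writing hypotheses this way forces every experiment to fill all four slots, which makes vague ideas like “whether a shopping cart helps” stand out immediately.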
#3: Test changes that are too subtle

So, there is a reason why in workshops we always start with social proof, authority and scarcity. Not because we are big fans of them (we were, though), but because of the clear distinction between them and their clear translation to content.
Good designers sometimes seem to have this quality: they sense what a user desires and translate that into very subtle design aspects. I don’t have that. When we design experiments, it needs to be crystal clear what is being tested.
In the above example the visual cues for social proof, as does the headline, and even the button shows this is the deal that most people take. It’s not subtle. It’s pretty blunt. And I think that’s the way to look at running experiments. Make them blunt.
Changing just one word. Adding some more white space. Adding different pieces of persuasive content to the page. Embossing the award a little more. NOT HAVING THAT!
Adding a lot of white space? Yes.
Why not change entire paragraphs? And focus on one persuasion strategy to start with. Finally, let’s forget about embossing altogether.
Cool sciency-sounding conclusion
My friend Dirk Franssen PhD (former experimenter at Philips, now working for Web Art AG) pointed out this key takeaway that I would like to channel:
Always try to understand the determinants (rational or emotional) of online behavior, because these are the fundament of your experiment. For example, a shopping cart might not be a relevant determinant, and that’s why a shopping cart wouldn’t be the best start.
We judge determinants on importance (how much unique variance does this determinant cover?) and changeability (can we change this determinant?).
When you know the right determinant, you will have to find out which behavior change strategy (endowment, social proof, et cetera) has the most effect on that determinant.
“The fundament for your CRO experiment = determinant x behavior change strategy.” Dirk Franssens, Web Art AG
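To make the importance-times-changeability idea concrete, here is a toy prioritisation sketch. The determinants and their scores are made up for illustration (my own example, not from Dirk’s work):

```python
# Toy prioritisation of candidate determinants, scoring each on
# importance x changeability. All names and numbers are invented.
determinants = {
    "price transparency": {"importance": 0.8, "changeability": 0.9},
    "trust in the brand": {"importance": 0.9, "changeability": 0.4},
    "shopping cart UI":   {"importance": 0.2, "changeability": 0.9},
}

def score(d: dict) -> float:
    """Higher = more worth experimenting on first."""
    return d["importance"] * d["changeability"]

best = max(determinants, key=lambda name: score(determinants[name]))
print(best)
```

Here “trust in the brand” matters most but is hard to change, so the sketch ranks “price transparency” first: only after picking a determinant like that would you choose the behavior change strategy to act on it.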
Oh yeah, just to be clear: these tips aren’t a guarantee of success. Even when I ran the social proof example for the airline (screenshot above), I got zero result.
But that, my friends, is something I would like to discuss in a later post.