# You can select either version A or version B — test both and keep the best

## A messy piece about Conversion Rate Optimization (CRO), A/B testing, and how you can draw inspiration from both to improve your life

#### Unintended experiment as a source of learning

It took an experiment for me to truly understand the importance of CRO (Conversion Rate Optimization) and its subset, A/B testing. To be frank, an unintended experiment.⚗️

As part of a sales plan to boost a startup’s revenue, I reached out to hundreds of marketing professionals and brand managers in a somewhat unorganized, ad-hoc fashion.

Having sent hundreds of messages and received only a few responses, I got curious about which approach was most effective at reaching the goal (in this case, starting a conversation).

There were plenty of variables: channel (LinkedIn and e-mail), subject lines, message content, length of the message, follow-ups, timing, and so on.

I did a quick-and-dirty analysis of the outbound messages and uncovered some great insights (more on this towards the end of the article).

This inspired me to learn about CRO and incorporate the concept into my daily tasks.

#### Conversion Rate Optimization, a foggy concept

Let’s start by demystifying the term CRO.

It is the process of testing different versions of an element (e.g. a web page, a button color, the title of an article) to find out which one performs best (e.g. which one gets the most clicks from website visitors, or which one leads to the most revenue).

One subset of CRO is A/B testing, which consists of setting up two different variations of a web page or landing page and sending equal amounts of traffic to each. It is different from a before-and-after test: both variations run at the same time, so external factors such as seasonality affect them equally.

Foggy still?🌫️

To give you a simplistic example, imagine you have a website that sells items.

The website displays the “Buy” button in one of two colors to test which color is more effective.

Out of the 100 visitors:

• 50 see a red button (group A)
• 50 see a green button (group B)
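A common way to produce that even split in practice is deterministic hash-based bucketing, so the same visitor always lands in the same group. Here is a minimal sketch; the user IDs, experiment name, and button labels are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "buy-button-color") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing (experiment name + user ID) gives a stable, roughly 50/50
    split: the same user always sees the same variant, and different
    experiments get independent splits.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 2  # 0 or 1
    return "A (red button)" if bucket == 0 else "B (green button)"

# Hypothetical visitors: each is assigned once and stays in that group.
groups = [assign_variant(f"user-{i}") for i in range(100)]
print(groups.count("A (red button)"), groups.count("B (green button)"))
```

Because the assignment depends only on the hash, no per-user state needs to be stored to keep the experience consistent across visits.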

You collect the results of this A/B test and realize that 5% of group A clicks on the button, while 20% of group B does (an unrealistic gap, but it keeps the example simple).

Oh, yay! That easy? Let’s try different colors now and adopt this CRO technique. 🆎

#### Slow down, cowboy!

Are you sure that testing with 100 people is enough to draw conclusions? Perhaps you would get different results with a thousand users.

Are you sure that the change in color is what pushed Group B to buy more? Perhaps Group B users visited the website with the intent to purchase anyway.

Are you sure that Group B is not composed of website visitors from Group A? Perhaps user number 1 is the same as user number 80.

As you can imagine, CRO doesn’t come without its implications (or complications). There are many variables you have to keep in mind when you make decisions based on experimentation.

Running experiments is easy, but getting valuable results out of them is not. 🔮
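One way to answer the “is 100 people enough?” question before running a test is a standard sample-size calculation for comparing two proportions. A minimal sketch, assuming a hypothetical 5% baseline click rate that you hope to lift to 10%, at the conventional 5% significance level and 80% power:

```python
import math

def sample_size_per_group(p1: float, p2: float) -> int:
    """Visitors needed in EACH group to detect a change in conversion
    rate from p1 to p2, at a two-sided 5% significance level with 80%
    power (z values 1.96 and 0.84, rounded)."""
    z_alpha = 1.96  # critical value for alpha = 0.05, two-sided
    z_beta = 0.84   # z value corresponding to 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical scenario: baseline 5% click rate, hoping to reach 10%.
print(sample_size_per_group(0.05, 0.10))  # prints 432
```

Even in this optimistic scenario, you would need hundreds of visitors per group, not fifty; the smaller the lift you want to detect, the larger the sample required.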

#### What shall I use CRO for?

To know what you are trying to optimize, assess your business goals and try to understand how you would convert those into marketing goals.

Say, for example, that your business goal is to increase revenue (duh), and one of the key factors is the number of Facebook shares you get per article you publish. The more Facebook shares you get, the more website visitors, and the more you sell.

Your marketing goal in this scenario is to increase “shares per article”, and your objective is to maximize the number of shares each article gets.

Don’t let yourself be dragged into a “trial-and-error” mindset; use structured optimization to pull the levers that boost revenue.

The benefit of using CRO is that you are no longer banking on gut feeling alone. Data speaks on your behalf. Therefore, you have to make sure that the data you collect is not biased or poorly gathered.

Educating yourself on terms such as the p-value, the null hypothesis, sample size, and statistical significance will give you a clearer idea of what makes your data legitimate. A basic grasp of statistics is what keeps you from falling into common traps. 🕳️
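To make those terms concrete, here is a two-proportion z-test applied to the button example, with made-up counts rather than real data: the null hypothesis is that both colors convert at the same rate, and the p-value says how surprising the observed gap would be if that were true.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for the difference
    between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # shared rate under the null
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 3/50 clicks for red (A), 10/50 for green (B).
z, p = two_proportion_z_test(3, 50, 10, 50)
print(round(z, 2), round(p, 3))
```

With these numbers the p-value lands just under 0.05, so the result is nominally significant; but as the section above warns, a sample this small leaves the estimate fragile, which is exactly why sample size matters.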

Don’t dive into CRO and A/B testing straight away; first learn how to interpret, analyze, and understand the results. Also, keep in mind that these practices are not foolproof and can produce misleading conclusions.

#### What else does CRO entail in addition to A/B testing?

A/B testing (also called a “split test”) is not the only component of CRO. There are several other methods that help you understand how users interact with your platform and what roadblocks they face.

**Click heatmaps**

A click heatmap shows you where users click the most and how they navigate. With the help of color intensity, you can see which areas of your website attract the most attention from visitors, and which ones are ignored.

For instance, if you notice that most visitors leave your website without going past the first page (high bounce rate), you may wonder what is going wrong.

If a big chunk of users clicks on a non-clickable element of your page, the resulting frustration will very likely drive them away. You have just spotted a bottleneck in the interaction with your site. 🐭

**Scroll heatmaps**

A scroll heatmap shows you how far users scroll down your page. It lets you optimize page length and spot which hidden components of your web page should surface higher up.

If you want website visitors to take action on a button in the lower part of your page (e.g. a “Buy” button), a scroll map will show you whether they ever reach it. 🕯️

**Session recordings**

Think of this as a hidden camera on your webpage. You will be able to watch screen recordings of user sessions and analyze where visitors click, how they scroll, and how they interact with your webpage or app in general.

While this methodology is time-consuming (watching recordings one by one), it will give you an in-depth understanding of how users interact with your website. You might spot behavior you otherwise would not see, being a regular visitor to your own site. 📹

#### What I have learned from an unintentional but fruitful experiment

Reflecting on my initial outbound sales strategy, I realized what worked and what did not.

Here are a few key lessons from the messy experiment that pushed me to look into optimization:

• Do not assume everyone reacts similarly to a well-written message. I sent a tailored, original message to a lead in a specific industry and got a response almost immediately. Assuming I would get a similar reaction from professionals in the same industry, I contacted more than 10 of them expecting positive responses. None responded.
• LinkedIn InMail results in poor response rates. I tried reaching out to leads both via LinkedIn InMail and via e-mail; the latter performed better.
• The more specific and customized the message, the higher the response rate. I experimented with both tailored and generic messages. As you would expect, generic messages delivered close to no responses. Tailored messages with statistics about the industry were more effective, and messages with statistics about the specific company outperformed the rest.
• Following up consistently increased the response rate.

Because of the ad-hoc nature of this “experiment”, I cannot share numeric results.

A few weeks from now, I will run a new structured test and will build a case study to share insights with you. 📜

Follow my account here on Medium if you want to be notified as soon as I publish it.