From the Experiment Cookbook

Experiment Recipe: Landing Page

How to run a Landing Page experiment for Problem-Solution fit

Erik van der Pluijm
WRKSHP

--

Landing like a boss. Photo by SpaceX on Unsplash

(This post is based on content from the Experiment Cookbook)

What is a Landing Page experiment?

A Landing Page is a simple web page with a single purpose: to convert incoming visitors, typically by getting them to sign up for an email list.

A Landing Page experiment is a type of experiment you can use to validate assumptions in the lean startup methodology. A stream of web traffic is directed to the landing page, which asks visitors to sign up. The number of visitors that sign up tells you whether your hypothesis is validated.

When to use this experiment?

Landing Page experiments are very useful when validating Problem-Solution fit. They can be used to find out which solutions your potential customers prefer and what types of solutions resonate with them. They can also be used to show that a specific solution achieves Problem-Solution fit.

What do you need?

  • A landing page (duh!). You can use different tools to design and build an effective landing page.
  • A source of traffic. You need to get web traffic to your landing page in order to measure performance.
  • Benchmarks. You need some benchmark to compare your results to, so that you are able to decide if your experiment gives a clear signal.
  • A good way to see if your result is significant so you can make decisions with confidence.

Step 1. Design the experiment

The first step is to design your experiment using the experiment canvas, defining your riskiest assumption and falsifiable hypothesis.

Riskiest Assumption

When doing Problem-Solution fit experiments, it is taken as given that you have already established that potential customers actually experience the problem and care about having it solved.

Your next goal is to solve that problem for them. It is very tempting to jump in and immediately test the solution you have in mind (and you can do that with a landing page experiment), but in many cases it pays to try different options and see which ones resonate the most with customers.

A typical Riskiest Assumption at the problem-solution fit stage might be:

We can come up with one or more solutions that solve the problem for our customers in a way that resonates with them.

This is a Riskiest Assumption that can be tested with a Landing Page experiment, by creating one or more landing pages and testing which one resonates the most.

Benchmarks

Now, we need to work towards defining a falsifiable hypothesis for the experiment. To do that, we need to do a bit of research and calculation.

To get the numbers straight for your hypothesis, you’ll first need to know what a ‘normal’ conversion rate is for the landing page. This ‘normal’ rate is what we’ll use as a benchmark. The experiment will try to measure if your landing page is converting significantly better than that benchmark.

Typical benchmarks for landing page conversion are:

  • Average conversion rate for a landing page: 2.5%-5%
  • Good conversion rate for a landing page: >10%

Note: Keep in mind that if you are running your experiment with a ‘fake brand’ or a new brand that nobody knows yet, your rates will likely be lower. Also keep in mind that a lot of optimisation is usually required to get your conversion rate towards the 10% range.

Without any prior knowledge, a good benchmark to pick for your landing page conversion experiment is 5%. If you do significantly better, your assumption is validated.

Note: If you have more precise data for landing page performance for your industry or a target audience, you should use it instead.

Traffic

Now that you have defined a benchmark conversion rate, we should talk about getting an actionable result out of the experiment, a result you can base a decision on with confidence.

What this means is illustrated below:

If you send your three best friends to the landing page and they all sign up for the email list, you’d have a 100% conversion rate: a very strong effect.

However, it’s intuitively clear that you can’t make smart decisions based on this result: the three best friends do not reflect the entire population of potential customers. It’s very possible that other visitors will behave differently. You can’t extrapolate from the result you have got with any confidence.

Even the large difference between your 100% conversion rate and the 5% of the benchmark does not allow you to make decisions with confidence. The risk is too great.

If, on the other hand, you were able to send a million visitors to the landing page and you got a conversion rate of 7.5%, that is a much weaker effect. Still, you’d intuitively be able to say with confidence that your landing page performs better than the benchmark. There is less risk, because the variation in the population would most likely be well reflected in the effect you measured.

With a larger number of visitors (sample size), you can make confident decisions on much smaller observed effects.

Of course, sending a million people to your landing page as a fledgling startup is very improbable. In reality, in this early stage, you’ll probably end up with a number of visitors that is a lot closer to 3 than to one million.

This can be a problem. How many people do you need to get to the page before you can be confident in the outcome? 100? 1000? 10000?

Calculate the number of visitors

Luckily, it is possible to calculate this number and find out.

Evan Miller made an excellent calculator (https://www.evanmiller.org/ab-testing/sample-size.html) that you can use to come up with the numbers you need.

It looks a bit technical, but don’t worry, it’s easy to use once you get comfortable with the terminology.

Screenshot of Evan Miller’s excellent calculator

A/B test?

The calculator answers the following question:

“How many test subjects are needed for an A/B test?”

This may at first seem confusing, as the landing page experiment described above is not an A/B test — it is a test to see if the conversion rate is higher than the benchmark. But when you think about it, this can be easily restated as an A/B test: A is the benchmark, and B is your experiment.

Baseline Conversion Rate

This is where your benchmark figure goes. I’ve already entered the 5% in the screenshot.

Minimum Detectable Effect

The magnitude of the effect you want to be able to pick up.

I entered 2.5% here, which means that if your measured conversion rate is below 2.5% (5–2.5) or above 7.5% (5+2.5), you will be able to show your landing page performs differently (better or worse) than the benchmark. If the effect you measure is greater than this minimum detectable effect, the probability that this is because of random chance is low and you can be confident that what you measured is a real effect.

If your experiment’s conversion rate lies between 2.5% and 7.5%, you won’t be able to confidently say your conversion rate is different from the benchmark, as any difference could be the result of random chance.

Note: If you want to be more precise, and, say, be able to detect a difference to the benchmark of 1%, you will see you need a much larger sample size.

Statistical Power (1-β)

This is usually set to 80%. It defines the chance your test actually detects the minimum detectable effect, assuming that effect really exists. In effect, it determines how many ‘false negatives’ you allow.

What does this mean? Let’s say you already know the conversion rate of your page, and it is 7.5% (which coincides with our minimum detectable effect). If you ran that page through the test 100 times, a statistical power of 80% means you would get, on average, 80 positive results (your conversion rate is different from the benchmark). You would also get around 20 false negatives, where you fail to pick up on the difference.

Now, 20% may seem like quite a lot, but false negatives can’t be avoided entirely; there will always be some. Increasing the power to, say, 95% will substantially increase your required sample size.

Tip: Start with a power level of 80%, and, when you see a positive result, increase the power level in a new experiment.

This means that if you don’t measure a positive result, you can be 80% confident in that result.

Significance level (α)

Typically, this is set to 5%. It defines the chance that your test detects a difference when one does not actually exist. In effect, it determines how many ‘false positives’ you allow.

What does this mean? Let’s say again you have a page with a known conversion rate, in this case smack on the benchmark: 5%. If you run the experiment with this page 100 times, a significance level of 5% means that you would get, on average, 5 false positive results (the test claims your conversion rate differs from the benchmark when it doesn’t).

This means that if you do measure a positive result, you can be 95% confident in that result.
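If power and significance still feel abstract, a quick simulation can make them concrete. The sketch below is my own rough illustration, not part of the calculator: it repeatedly simulates 1273 visitors on the benchmark page and 1273 on your page, and counts how often a standard pooled two-proportion z-test flags a difference.

```python
# Monte Carlo illustration of significance (false positives) and power
# (true positives), assuming n = 1273 visitors per variation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n, alpha, runs = 1273, 0.05, 100_000
z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value

def detection_rate(rate_a, rate_b):
    """Fraction of simulated experiments where a pooled two-proportion
    z-test declares the two pages different at significance level alpha."""
    a = rng.binomial(n, rate_a, size=runs)   # signups, benchmark page
    b = rng.binomial(n, rate_b, size=runs)   # signups, experiment page
    pooled = (a + b) / (2 * n)
    se = np.sqrt(pooled * (1 - pooled) * 2 / n)
    z = np.abs(b - a) / n / se
    return np.mean(z > z_crit)

# No real effect: every detection is a false positive (roughly alpha, 5%).
print(detection_rate(0.05, 0.05))
# Real effect of 7.5% vs 5%: detections land in the neighbourhood of the
# chosen 80% power (a few points lower here, since the pooled z-test's
# variance approximation differs slightly from the calculator's formula).
print(detection_rate(0.05, 0.075))
```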

Fill in the numbers

Looking at the experiment, and using the numbers from the screenshot, you would need at least 1273 visitors to your page.

  • If you get 1273 visitors, and you measure a conversion rate > 7.5%, you can be 95% confident that you have a positive result and you can validate the hypothesis.
  • If, after 1273 visitors, you do not see a conversion rate > 7.5%, you can be 80% confident that you do not have a positive result.

Note: In the calculator screen, it says ‘per variation’, but we’re using the benchmark data as one of the variations.
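If you’d rather script this than click around the web page, the sketch below implements a standard normal-approximation formula for comparing two proportions. It appears to be the same formula the calculator uses, since it reproduces the 1273 figure for the inputs above (the function name and structure are my own; scipy is assumed for the normal quantiles):

```python
# Visitors needed per variation for a two-proportion test, using the
# standard normal-approximation formula (matches the screenshot inputs:
# 5% baseline, 2.5% minimum detectable effect, 80% power, 5% significance).
import math
from scipy.stats import norm

def sample_size(baseline, mde, alpha=0.05, power=0.80):
    """Visitors per variation needed to detect an absolute difference
    of `mde` from a `baseline` conversion rate."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance level
    z_power = norm.ppf(power)           # statistical power
    p, d = baseline, abs(mde)
    a = (z_alpha * math.sqrt(2 * p * (1 - p))
         + z_power * math.sqrt(p * (1 - p) + (p + d) * (1 - p - d))) ** 2
    return math.ceil(a / d ** 2)

print(sample_size(0.05, 0.025))  # -> 1273
print(sample_size(0.05, 0.01))   # a 1% MDE needs a far larger sample (~7663)
print(sample_size(0.025, 0.05))  # -> 191, the low-traffic variant used below
```

As the second call shows, tightening the minimum detectable effect to 1% blows up the required sample size, just as noted above.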

Target Audience

You’ll need to know who your target audience is for this experiment, and how to find them online. When you’re running this experiment, you should already know a lot about your audience from previous stages of your startup journey, when you did Idea Validation.

Time

You’ll need to set some time limit for the experiment. There are two ways to do that:

  1. Calculate an expected time to reach the required number of visitors
  2. Let it trickle in and see.

I usually prefer the first, but the required time can be hard to estimate. Because you have set a fixed minimum number of visitors, the time limit is not a very strict criterion in this experiment (i.e. if you go over the time limit, you don’t need to invalidate your assumption). But if you reach the limit and you’re nowhere near the required number of visitors, that is a signal too: perhaps you’re barking up the wrong tree and need to find better traffic sources.

Example: traffic from a link on a corporate homepage

  • Homepage visitors per week: 50,000
  • Homepage visitors per day: 50,000 / 7 ≈ 7,143
  • Link clickthrough rate: 1% (based on other links on the page)
  • Landing page visitors per day: ≈ 71
  • Days needed: 1273 / 71 ≈ 18

A good rule of thumb is to allow 1.25-1.5 times that amount of time to be sure you have enough, so in this case roughly a month (27 days).

Example: traffic from ads

  • Ad Cost per Click: $2.69 (benchmark from Google)
  • Desired time to run the experiment: 14 days
  • Visitors needed per day: 1273 / 14 ≈ 91
  • Daily ad budget needed: 91 × $2.69 ≈ $245

A good rule of thumb is to set the budget a bit higher, and keep checking if you are at the minimum required number of visitors so you can immediately stop the ads.
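These planning numbers are easy to script as well. Here is a minimal sketch covering both examples, using the same assumed figures as above:

```python
# Planning arithmetic for the two traffic examples above.
import math

VISITORS_NEEDED = 1273  # from the sample size calculation

# Example 1: traffic from a link on a corporate homepage.
homepage_per_day = 50_000 / 7            # weekly homepage visitors, per day
link_ctr = 0.01                          # clickthrough rate of the link
landing_per_day = homepage_per_day * link_ctr
print(math.ceil(VISITORS_NEEDED / landing_per_day))  # -> 18 days needed

# Example 2: traffic from ads.
cpc = 2.69                               # benchmark cost per click, $
days_to_run = 14
visitors_per_day = math.ceil(VISITORS_NEEDED / days_to_run)  # -> 91
print(visitors_per_day * cpc)            # 244.79 -> ~$245 daily ad budget
```

Remember the rules of thumb from both examples: pad the time estimate by 1.25-1.5x, and set the ad budget a bit above the calculated minimum.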

Putting it all together: build your Hypothesis

Now that you have calculated the number of visitors you need and the time needed, you can combine these with the riskiest assumption into a falsifiable hypothesis.

Example Riskiest Assumption:

  • We can come up with very different solutions that solve the problem for our customers in a way that resonates with them.
  • This experiment tests one of those different solutions, to see if it solves the problem for our customers in a way that resonates with them.

Falsifiable Hypothesis template:

  • We believe, that a landing page experiment
  • with at least 1273 visitors
  • selected from our target audience by running targeted ads,
  • results in at least 7.5% of visitors signing up
  • within 14 days

What if I don’t have access to 1273 people, can I still run this experiment?

Sure you can, and you can even get results: they may just be a bit less useful.

Example:

If you set the benchmark conversion rate to 2.5% (the lower boundary of the average landing page conversion rate benchmark), and the minimum detectable effect to 5%, you only need a sample size of 191. That’s much easier to achieve. Now, if you do measure a conversion rate of > 7.5%, you can still be 95% confident that your hypothesis is validated.

But what you actually now know for sure is only that your landing page is performing better than the benchmark. 2.5% is really on the low end of landing page conversions. Are you happy to stake everything on beating the low end of average?

A good approach might be to first run the experiment with a sample size of 191, and if it is positive or close to positive, run it again with a higher benchmark conversion rate. That way, you can double check your results, and you’ll know if it seems worthwhile to spend more time and money on this.

What will it cost to run this experiment?

Now that you know how many visitors you need, is it worthwhile for you to run this experiment? This really depends on where your traffic comes from.

  • If you already have a source of traffic available and you can redirect some of it, e.g. by placing a link, sending an email, or writing a blog, it is definitely interesting and cheap.
  • If you do need to advertise, you’ll incur costs depending on the type of business you’re in and the keywords you’ll need to use. You’ll need to calculate the cost beforehand.

Example - Advertising on Google Search:

  • Traffic Needed: 1273 (minimum)
  • Cost per Click (CPC): $2.69 (benchmark from Google ads)
  • Cost: 1273 × $2.69 ≈ $3,425
  • Clickthrough (CTR): 3.17% (benchmark from Google ads)
  • Reach: 1273 / 3.17% ≈ 40,000

As you can see, this is quite an operation, and it will cost you around $3,425 to do this with advertising. (Note: CPC rates vary heavily depending on the industry you’re in. Look at the table below to get more insight.)
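Worth double-checking with a quick calculation, because the reach figure in particular is easy to get wrong:

```python
# Sanity check for the advertising example (benchmark figures from above).
visitors_needed = 1273
cpc = 2.69      # average cost per click, $
ctr = 0.0317    # average ad clickthrough rate

print(visitors_needed * cpc)   # 3424.37 -> budget roughly $3,425
print(visitors_needed / ctr)   # ~40,158 -> about 40,000 people must see the ad
```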

For early stage landing page experiments, therefore, it is much preferable to use traffic coming from existing channels. Build that email list, comment on Reddit or Quora, create partnerships and guest posts, put your idea on Product Hunt, be creative. Find the places online where your potential customers are, and find a smart, creative way to get your landing page on their radar.

For later stage landing page experiments, especially when you already have a product to sell and you are measuring sales conversions rather than signups, it can be worthwhile to use advertising — but please do the math before you start.

Average Clickthrough rates (CTR) and Cost Per Click rates (CPC) for Google Search Ads per Industry (2019)

  • Industry | Average CTR | Average CPC
  • Advocacy | 4.41% | $1.43
  • Auto | 4.00% | $2.46
  • B2B | 2.41% | $3.33
  • Consumer Services | 2.41% | $6.40
  • Dating & Personals | 6.05% | $2.78
  • E-Commerce | 2.69% | $1.16
  • Education | 3.78% | $2.40
  • Employment Services | 2.42% | $2.04
  • Finance & Insurance | 2.91% | $3.44
  • Health & Medical | 3.27% | $2.62
  • Home Goods | 2.44% | $2.94
  • Industrial Services | 2.61% | $2.56
  • Legal | 2.93% | $6.75
  • Real Estate | 3.71% | $2.37
  • Technology | 2.09% | $3.80
  • Travel & Hospitality | 4.68% | $1.53

Source: https://www.wordstream.com/blog/ws/2016/02/29/google-adwords-industry-benchmarks

Step 2. Build the Landing Page

Design the Page

A very easy tool to define what goes on the landing page is the Landing Page Canvas.

Page Building

One (expensive) way of building a landing page is to find developers and build it from scratch.

The cheaper and faster alternative is to use a landing page tool with a visual editor and put the landing page together in that way. There are lots of landing page builders that you can use, with nice templates and integrations to speed up the process. Some are free, but most are subscription based. All work just fine for the purpose of a landing page experiment.

Tools

Most have great guides for setting up your landing page in a few hours at most. Find more here: An overview of 12 landing page builders.

Storing email addresses

In many cases, a landing page experiment is geared towards gathering email addresses as a means to gauge the interest for a product or service. So, when you are building your page, you will need a place to store these addresses.

You can use an email service such as HubSpot, Mailchimp, ActiveCampaign, or Drip to capture these addresses, and use the integrations in your page builder to capture signups directly, or use e.g. Zapier to send each address to a Google Sheet.

Analytics

To calculate the conversion rate, you will need to know how many unique visitors your landing page had. The easiest way to track visitors is Google Analytics. Your landing page builder of choice will have a Google Analytics integration, and will most likely come complete with the steps to set it up correctly.

Here is a how-to for Squarespace’s Google Analytics setup

Warning! When you set up analytics, make sure that you exclude your own visits. Especially with low visitor numbers, not doing this can really throw your numbers out of whack.

Step 3. Run the experiment

Launch the page, and start sending traffic over using the means stated above.

Here is a good guide for setting up Google Ads.

Here is a good guide for growth hacking visitors to your landing page without spending a dime.

Warning: Keep track of the analytics on a daily basis, but only draw conclusions once you have reached the minimum number of visitors you calculated. Even if you are at a 60% conversion rate, if you haven’t reached the number yet it’s not actionable data!

Step 4. Interpret the data

Now that the experiment is done, calculate the final conversion rate and compare it to the benchmark.

To get your conversion rate, take the number of signups and divide it by the number of unique visits from analytics.

Warning! Make sure to remove any test signups and such before you do this!

Using the numbers from the example above:

  • If you get 1273 visitors, and you measure a conversion rate > 7.5%, you can be 95% confident that you have a positive result and you can validate the hypothesis.
  • If, after 1273 visitors, you do not see a conversion rate > 7.5%, you can be 80% confident that you do not have a positive result.
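If you like, this final check is easy to script too. A minimal sketch: the 7.5% threshold and 1273 minimum come from this example, and the signup numbers passed in are hypothetical.

```python
# Interpret the experiment outcome against the pre-registered numbers.
def interpret(signups, unique_visitors, threshold=0.075, minimum_sample=1273):
    if unique_visitors < minimum_sample:
        return "Keep going: not enough visitors yet for an actionable result."
    rate = signups / unique_visitors   # conversion rate
    if rate > threshold:
        return f"Validated: {rate:.1%} beats the {threshold:.1%} threshold."
    return f"Not validated: {rate:.1%} is below the {threshold:.1%} threshold."

print(interpret(signups=103, unique_visitors=1300))  # hypothetical result
```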

Now that you know your result, you’re ready for the next step, and you can decide whether to pivot or persevere.

This post is based on content from the Experiment Cookbook, with over 20 detailed recipes for experiments for idea validation, problem solution fit, and product market fit. What is your favourite experiment? Let me know!

Keep experimenting!
