Getting Started with A/B Testing? Run These Five Experiments First

Emily Mermell
Published in Bootcamp
8 min read · Jun 19, 2023

Have you ever pushed a change live on your website only to find out that it’s a flop once your conversion rate starts dipping?

I’m pretty sure every marketer has been there, myself included.

So how do we avoid having a stale website without pushing things live on a whim and risking negative results?

Enter: A/B testing.

A/B testing involves comparing different versions of a webpage or element to determine the optimal performer. This allows you to uncover insights that help improve your website’s performance and conversion rates, all while learning a thing or two about your audience.

With A/B testing, you’re empowered to make data-backed decisions when it comes to things like design, layout, content, and overall messaging, resulting in a positive experience for your users and a positive impact on your bottom line.

In this article, we’ll explore a range of quick-win A/B tests that you should prioritize. Let’s dive in and discover how you can optimize your website for success. But first, let’s dig into what A/B testing even is.

What is A/B testing?

A/B testing, also known as split testing or web experimentation, is a method of comparing two or more versions of a webpage or element to see which one performs best.

By dividing traffic into groups and tracking user behavior and conversions, A/B testing provides valuable data to inform decision-making and achieve business goals.

The impact of A/B testing extends beyond web marketing. In fact, you can use the results of A/B tests to create stronger content, more targeted emails, better-performing paid ad campaigns, and more.

Through data-driven decision-making, A/B testing eliminates subjective opinions and delivers concrete evidence to optimize efforts and enhance user experience. By testing different variations, you can identify elements that resonate with your audience, driving higher conversion rates, refining strategies, and staying ahead of your competitors.

Before we talk about different types of A/B tests, let’s get into what makes a good A/B test.

The making of a good A/B test

A good A/B test consists of four things: a clear hypothesis, a well-thought-out experiment, trackable metrics, and enough of an audience to reach statistical significance.

Let’s dig into each of those elements below.

Hypothesis

Your hypothesis is a prediction about the expected impact of a change or variation on user behavior or key metrics. It serves as the basis for conducting the experiment and guides the decision-making process.

Formulating a hypothesis is crucial to your A/B test as it provides direction for the experiment, facilitates data-driven decision-making, and ensures meaningful interpretation of the results.

This is what you’ll use to validate whether your experiment worked or not.

Try using the following formula to formulate your hypothesis:

“Changing [specific element or feature] to [variation] will result in [expected impact] on [user behavior or metrics] because [reasoning or hypothesis explanation].”

For example:

“Changing the call-to-action button color to red will result in a higher click-through rate because red is a more attention-grabbing color and can create a sense of urgency among users.”

Experiment

The experiment is what’s being tested. In some cases, this could be an A/B test where you’re testing two variations against each other, but tests could also include more than two variations.

Regardless, your variations should be different enough from each other to accurately identify the impact of the specific changes being put in place. Noticeable differences help pinpoint the factors driving variations in user behavior and enhance the validity of the experiment.

Trackable metric

Trackable metrics are the specific measurements and data points that can be accurately recorded and analyzed to evaluate the performance and effectiveness of different variations in an experiment.

These metrics provide quantitative insights into how users interact with the variations and help assess their impact on key performance indicators (KPIs).

Trackable metrics are essential in A/B testing as they provide objective, data-backed insights that guide decision-making, validate hypotheses, and drive continuous improvement in website performance, user experience, and conversion rates.

Statistical significance

Statistical significance is crucial in web testing as it confirms whether observed differences between variations are meaningful and not due to chance. By ensuring statistical significance, you can confidently make data-driven decisions based on accurate insights.

For instance, if Variation B shows a significantly higher conversion rate than Variation A in an A/B test, statistical significance validates the difference as meaningful, enabling you to implement the winning variation with confidence.

Understanding and applying statistical significance ensures reliable results and drives improvements to your website’s performance.

You can use CXL’s A/B test calculator to determine the necessary results for reaching statistical significance in your test.
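If you’d rather check significance yourself, a two-proportion z-test is one common way to compare conversion rates between two variations. Here’s a minimal Python sketch; the visitor and conversion counts are made up purely for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Test whether variation B's conversion rate differs significantly
    from variation A's. Returns (z-score, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: 5,000 visitors per variation,
# A converted 400 visitors (8.0%), B converted 470 (9.4%)
z, p = two_proportion_z_test(400, 5000, 470, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these example numbers, the p-value comes out below the conventional 0.05 threshold, so the lift for Variation B would be considered statistically significant. With smaller samples or a smaller difference, the same test would tell you to keep collecting data before declaring a winner.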

Experiment #1: Headline variations

The headline is often the first impression visitors have of your website. Test different variations to find the one that resonates best with your audience. Experiment with tones, lengths, and wording to see which headline grabs attention and generates more clicks.

This is a great way to test what messaging has the biggest impact on your overall conversion rate.

The imaginary scenario

The B2B SaaS company DataSync Solutions provides software that helps data analysts collaborate more effectively with their teams.

They want to test two headlines on their homepage to see whether their audience is more enticed by the idea of collaboration itself, or the outcome of that collaboration.

The hypothesis

Changing the headline on the homepage will lead to increased user engagement because a more compelling and clear headline will effectively communicate the value proposition of DataSync Solutions and encourage visitors to explore the SaaS tool further.

The experiment

Version A:

“Powerful Data Collaboration for Analysts”

This headline emphasizes the data collaboration aspect of their tool, specifically targeting analysts.

Version B:

“Unleash the Full Potential of Data Collaboration — Simplify Analysis, Boost Insights”

This variation aims to create a more compelling headline by highlighting the broader benefits of data collaboration, such as simplifying analysis and boosting insights. It aims to appeal to a wider audience beyond analysts, such as department heads and employers.

The trackable metrics

  • Click-through rate (CTR)
  • Time on page
  • Bounce rate
  • Conversion rate
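For reference, each of these metrics is a simple ratio computed per variation from raw analytics counts. Here’s a small Python sketch with hypothetical numbers for the two headline variations (your own figures would come from your analytics tool):

```python
def variation_metrics(visitors, clicks, conversions, total_seconds, bounces):
    """Compute the per-variation metrics listed above from raw counts.
    All inputs here are illustrative placeholders."""
    return {
        "ctr": clicks / visitors,                  # click-through rate
        "avg_time_on_page": total_seconds / visitors,
        "bounce_rate": bounces / visitors,         # single-page sessions
        "conversion_rate": conversions / visitors,
    }

# Hypothetical results for headline Versions A and B
a = variation_metrics(visitors=2000, clicks=240, conversions=90,
                      total_seconds=96_000, bounces=1100)
b = variation_metrics(visitors=2000, clicks=310, conversions=120,
                      total_seconds=110_000, bounces=940)
print(a["ctr"], b["ctr"])  # 0.12 vs 0.155
```

Comparing the dictionaries side by side makes it easy to see which variation wins on each metric, but remember that a raw difference still needs a significance check before you act on it.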

Experiment #2: Call-to-action (CTA) button

The CTA button is a critical element for driving conversions. Test different aspects such as colors, sizes, placements, and texts to find the most compelling combination.

The imaginary scenario

The annual search performance conference, OptimizedCon, gathers thousands of web marketers each year for a week-long learning experience.

They want to test the copy on the primary CTA on their registration page to see if their audience is more compelled to click a CTA that’s straightforward or one that instills a sense of urgency.

The hypothesis

Changing the CTA language on the event registration page will lead to increased conversion rates because the more persuasive and action-oriented language will better encourage visitors to register for the conference.

The experiment

Version A:

“Register Now”

This CTA language straightforwardly prompts visitors to register for the conference.

Version B:

“Secure Your Spot Today”

This variation aims to create a more persuasive CTA by using language that emphasizes the urgency of securing a spot at the conference.

The trackable metrics

  • CTR
  • Conversion rate
  • Drop-off rate

Experiment #3: Pricing strategies

Pricing has a huge influence on consumer behavior. Conduct A/B tests to find the optimal price point that maximizes your revenue.

Explore different pricing models such as tiered pricing, discounted bundles, or even removing the dollar sign.

The imaginary scenario

The education company, Leadership Excellence Academy, offers online leadership training for first-time managers.

They want to test two different pricing models to see whether their audience is more likely to register for a bundled course or individual modules.

The hypothesis

Changing the pricing strategy for the leadership course will result in increased conversion rates because a more appealing and value-oriented pricing structure will better align with the target audience’s expectations and perceived value of the course.

The experiment

Version A:

“$499 for the Complete Leadership Course”

This is the existing pricing structure that offers the complete leadership course for a fixed price of $499.

Version B:

“$299 for the Complete Leadership Course or $99 per Module”

This variation introduces a tiered pricing structure. It offers the complete leadership course for a reduced price of $299, making it more affordable for potential customers. Additionally, it provides the option to purchase individual modules of the course for $99 each, giving customers more flexibility.

The trackable metrics

  • Conversion rate
  • Revenue
  • Average Order Value (AOV)
  • Cart abandonment rate

Experiment #4: Testimonials and social proof

Leverage social proof to build trust and credibility with your audience. Experiment with different formats, lengths, and positions of testimonials on your website.

Consider testing different sources of social proof, such as customer reviews, case studies, or influencer endorsements. By finding the most effective way to display social proof, you can enhance your brand’s reputation and instill confidence in potential customers.

The imaginary scenario

The company WealthGuard Advisors offers full financial advisory support to professionals working at startup organizations.

They want to add testimonials and case studies to their website to see if their audience will be more likely to book a consultation.

The hypothesis

Adding social proof elements to the website will lead to increased trust and conversion rates because showcasing positive testimonials and client success stories will enhance credibility and persuade potential clients to choose WealthGuard Advisors for their financial advisory needs.

The experiment

Version A:

No Social Proof

This is the existing website that does not showcase any testimonials or client success stories.

Version B:

Testimonials and Success Stories

This variation includes a dedicated section on the website that prominently displays testimonials and success stories from clients who have experienced positive outcomes through WealthGuard Advisors’ financial advisory services. It highlights specific achievements, such as financial goals achieved, investments optimized, or retirement plans secured.

The trackable metrics

  • Conversion rate
  • CTR
  • Time on page
  • MQLs vs SQLs

Experiment #5: Page layout and design

The layout and design of your website significantly impact user experience and engagement. Test different variations to determine what resonates best with your target audience.

Explore the placement of key elements like images, text blocks, and forms. Compare a more traditional layout with a modern, minimalist design.

The insights gained from these tests can help you create a seamless and visually appealing experience that keeps visitors on your site longer.

The imaginary scenario

The SaaS startup, HRWizard Solutions, provides a paid platform that gives HR professionals the ability to work more efficiently.

They want to test the images displayed on their website to see if their audience is more compelled by stock photos or custom illustrations that showcase their software in action.

The hypothesis

Changing the image style on the web page from stock photography to custom product illustrations will lead to increased user engagement and conversion rates because custom illustrations can better align with the brand identity, evoke a stronger emotional connection, and enhance the perceived uniqueness and quality of HRWizard Solutions.

The experiment

Version A:

Stock Photography

This variation uses stock photography relevant to HR and corporate settings. It may include images of professionals in office environments, team collaboration, or workplace scenarios.

Version B:

Custom Product Illustrations

This variation replaces the stock photography with custom product illustrations specifically created for HRWizard Solutions. The illustrations may depict the software’s key features and benefits, or showcase users engaging with the product in a visually appealing way.

The trackable metrics

  • Bounce rate
  • Time on page
  • CTR
  • Conversion rate

Summing it all up

If you’re just getting started with A/B testing, try giving these five experiments a go.

By formulating clear hypotheses, conducting well-thought-out experiments, tracking the right metrics, and ensuring statistical significance, you can make data-driven decisions that lead to continuous improvement and an enhanced user experience.

This allows you to better understand what resonates with your audience, improves conversion rates, and helps you stay ahead of your competitors.

With A/B testing, you can eliminate subjective opinions, deliver concrete evidence, and unlock the potential for optimized performance in your digital strategies.

Start running these experiments and unlock the power of A/B testing today.

Happy testing!
