AB Testing — A revolutionary technique for digital marketing & a game changer for testers to add value to the product DIRECTLY!
A/B testing, also known as SPLIT testing or BUCKET testing, is an experimentation technique used to assess the usability of a certain feature. It is usually conducted on two or more variants of the same application, each assigned to a different user base. It helps identify user interest, thereby helping improve the product around users' needs.
Why do we need AB Testing after all?
Before going into that, let's discuss how a product becomes more popular & usable. How do you discover new opportunities in your business? How can you make it more profitable? The answer lies in the effectiveness of the product & its marketing, but using your marketing strategies in the right way is what sets a product apart.
Now the question here is: how do you know which version is the most usable? That's where the concept of AB testing fits in, because you can't wild-guess the expected response. You need enough real user data to understand your audience's perspective. You run different experiments on different user bases to find out which way is the best way to go. Thus, the way to evaluate your conversion funnel and marketing campaigns is to get data directly from your customers.
How does it work?
AB Testing can be carried out on both web & mobile applications. For a certain feature, screen, web page, or app, two or more variants are created, each carrying a different experiment. It could be a design-based experiment, a functional experiment, or an experiment for optimizing clicks & conversions.
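To make this concrete, here is a minimal Python sketch of how a tool might split users between variants. The function and experiment names are illustrative, not from any specific product; the key idea is that hashing the user ID makes the assignment deterministic, so a returning user always sees the same variant.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a variant by hashing.

    Hashing the (experiment, user) pair means the same user always
    lands in the same bucket for a given experiment, keeping their
    experience consistent across visits.
    """
    key = f"{experiment}:{user_id}".encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

# The same user always gets the same variant for the same experiment:
assert assign_variant("user-42", "signup-flow") == assign_variant("user-42", "signup-flow")
```

Because the hash is uniform, large user populations split roughly evenly across the variants without any stored state.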
Following steps could be used to conduct the split testing:
1- Identifying areas of visitor activity:
First, we need to identify the areas of visitor activity which need to be improved. For instance, an e-commerce site analyzed its user traffic & found that users visit the app and add items to their cart, but a major chunk of users leaves the site when asked to sign up. So the registration flow is what we need to improve in order to gain more customers.
2- Create variants on the basis of a hypothesis:
Taking the registration flow as an example hypothesis, we work on two possible solutions: one flow uses email to register while the other uses a phone number, or one registration form collects the user's personal information while the other takes only card or COD details. Using two different approaches creates two different variants. There can be multiple variants, and they can be totally different in nature & objective.
3- Create a user base for each variant:
Once the variants are created, the next step is to create different user bases. A user base is a group of users, generally categorized on the basis of a shared interest or trait. For example, grouping users who opt for COD (cash on delivery) versus those who opt for card payment, or grouping users in the same age bracket, from the same region, or in the same profession, etc.
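The grouping step above can be sketched in a few lines of Python. The user records and field names here are purely hypothetical; the point is simply to show cohorts being formed from a shared attribute.

```python
from collections import defaultdict

# Hypothetical user records; field names are illustrative only.
users = [
    {"id": 1, "payment": "cod",  "age": 23},
    {"id": 2, "payment": "card", "age": 31},
    {"id": 3, "payment": "cod",  "age": 37},
]

def segment(users, key):
    """Group users into cohorts by a shared attribute (e.g. payment method)."""
    cohorts = defaultdict(list)
    for user in users:
        cohorts[user[key]].append(user["id"])
    return dict(cohorts)

print(segment(users, "payment"))  # {'cod': [1, 3], 'card': [2]}
```

The same function works for any attribute — segmenting by `"age"` bracket or region only requires a different key.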
4- Testing variants with funnels
Now, in order to get data statistically, we create funnels before the testing activity. Funnels are like checkpoints in your application that monitor the traffic. These checkpoints can be any user action point: a landing page, a form submission, etc.
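A funnel is easy to picture in code. Below is a minimal sketch (the event log and step names are invented for illustration) that counts how many distinct users reached each checkpoint, having also passed every earlier one — exactly the drop-off data an A/B tool reports per variant.

```python
events = [
    ("u1", "landing"), ("u1", "add_to_cart"), ("u1", "signup"),
    ("u2", "landing"), ("u2", "add_to_cart"),
    ("u3", "landing"),
]

funnel = ["landing", "add_to_cart", "signup"]

def funnel_counts(events, funnel):
    """Per funnel step, count distinct users who reached that step
    and every step before it."""
    seen = {step: set() for step in funnel}
    for user, step in events:
        if step in seen:
            seen[step].add(user)
    counts, reached = [], None
    for step in funnel:
        reached = seen[step] if reached is None else reached & seen[step]
        counts.append(len(reached))
    return counts

print(funnel_counts(events, funnel))  # [3, 2, 1]
```

Here three users landed, two added to cart, and only one signed up — the drop between the last two steps is the pain point the registration-flow experiment targets.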
5- Derive results & make the right choice:
Analysis is the key here. After all the data has been gathered, we analyze the results to make the right choice. The data is systematically organized, plotted, turned into metrics, & driven to a conclusion. The decision is made in favor of the winning variant.
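Declaring a winner usually means checking that the difference in conversion rates is statistically significant, not just noise. A common way to do this is a two-proportion z-test; here is a self-contained sketch using only the standard library (the conversion numbers at the bottom are hypothetical):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 200/2400 conversions for A vs 260/2400 for B.
z, p = two_proportion_z_test(200, 2400, 260, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below the usual 0.05 threshold suggests B's lift is real; above it, the experiment should keep running or be redesigned.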
Tips for AB Testing:
Here are some pointers to consider to make sure that the experiment is well planned:
What to test — Design & Layout, Headlines & copywriting, Forms, CTAs (Calls to action), Images, audio, and video, Subject lines, Product descriptions, Social proof, Email marketing, Media mentions, Landing pages, or Navigation
Milestones to Achieve — Improved conversion rates, more user traffic, a higher number of views or subscriptions, a higher number of downloads, improved sales, improved time on page & a lot more.
Iteration — It's not a one-time activity; each variant needs to be iterated a certain number of times. The metrics will ultimately lead to a point of convergence or divergence.
Tool Selection — Identify the tool which best fits your testing experiment. Different testing tools provide different kinds of testing expertise; you just have to choose the one which matches your requirements.
Tools for AB Testing:
Here are a few popular tools which can be used to conduct AB testing:
· Convert Experiences
· AB Tasty
· Sentient Ascend
· Google Optimize
· Apptimize (For Mobile apps)
Overview of Google Optimize:
As an example, I am using Google Optimize (a free tool) to show you how to set up experiments.
1- First, you need to set up an account on Google Optimize in order to start. After logging in with one of your existing Google accounts (or creating a new one), it will ask you to add a browser extension:
2- Now, it will set up a container:
3- Start by creating an experiment. I have picked a sample site & my hypothesis is that the main page heading should be something catchy in order to get more conversions.
4- After adding the target URL, I’ll proceed to create variants. I made one variant where I changed the title & text of a button:
& here we go:
5- Select the target audience; Google Optimize offers various options for this purpose:
6- Google Optimize uses Google Analytics to gather the data.
So, after you link your changes to analytics, you can now deploy your code:
7- Setting the objective as 'Improve conversion rate', we'll proceed to the further settings.
8- After the experiment has run for a certain time span & for the targeted user base, it's time to declare the winner of the experiment. The reporting tab will help you view the results:
NOTE: The above demonstration is just a quick overview. Google Optimize is itself a very thorough tool to work with. To explore it more, here is the link to a short web series on this tool, which is really helpful.
Mistakes to avoid in AB Testing:
1- An invalid hypothesis, or using someone else's app statistics to derive your own app's hypothesis, can weaken your experimental grounding. The results might not be that helpful, wasting the time & effort of carrying it all the way.
2- Take one pain point at a time — Avoid combining too many testing elements in one test; it might affect the accuracy of the results. Break the elements into smaller groups & pick one at a time.
3- Using unbalanced traffic, picking an incorrect duration, & not considering external factors can cause your experiment to fail. These indicators play a pivotal role in this activity and need to be taken care of.
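Picking the right duration mostly comes down to collecting enough users per variant. A rough standard formula for a two-proportion test can estimate that number up front; the baseline rate and lift below are hypothetical, and the z-values correspond to the conventional 5% significance and 80% power.

```python
from math import ceil

def sample_size_per_variant(base_rate, min_lift, alpha_z=1.96, power_z=0.84):
    """Rough per-variant sample size for a two-proportion test.

    base_rate : current conversion rate (e.g. 0.05 for 5%)
    min_lift  : smallest absolute improvement worth detecting (e.g. 0.01)
    alpha_z, power_z : z-values for 5% significance and 80% power.
    """
    p1, p2 = base_rate, base_rate + min_lift
    p_bar = (p1 + p2) / 2
    n = ((alpha_z + power_z) ** 2 * 2 * p_bar * (1 - p_bar)) / (min_lift ** 2)
    return ceil(n)

# Hypothetical: 5% baseline conversion, want to detect a 1-point lift.
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,150 users per variant
```

Dividing that figure by your daily traffic per variant gives a lower bound on how long the experiment must run; stopping earlier risks the unbalanced-data failures described above.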
4- Not using the right tool can be a risk too. Tools are customized for various purposes; proper research & selecting the best fit is what makes the experiment effective.
Today, many popular companies use AB testing as a power tool to increase their audience viewership, subscriptions & readership. Among the most talked about are Netflix, Amazon, Discovery, Booking.com, WallMonkeys, Electronic Arts (SimCity 5), and Careem. Numerous other companies across many domains use the AB testing technique to get the best possible results.