Key steps in analyzing the results of an A/B test

Shraddha Gupta
4 min read · Feb 3, 2022

If you have run a couple of A/B tests, you know that the results are rarely what you expect. I'll use the example from Udacity's A/B testing course by Google to walk through some of the nitty-gritty.

The example is about a learning website, "Audacity". Audacity decides to change the course description on its course overview page, hoping the change will encourage more users to enroll in and complete the course. But before making that change site-wide, they want to A/B test it on a sample of visitors.

The primary metric they want to track is "Enrollment rate", along with a few supporting metrics like average reading time, average classroom time, #visitors at each step of the funnel, etc. Before moving forward, let's also take a look at the high-level visitor funnel below.

[High-level visitor funnel diagram; source: classroom.udacity.com]

Let's say we ran the test and have results from the two experiences, test vs. control. At first glance, the results look neutral. From here, how do we arrive at a launch/no-launch decision? I won't go deep into the data in this post, but I will cover some aspects that can act as a general checklist for reaching a conclusion.
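Before getting into the checklist, it helps to put a number on "looks neutral". Below is a minimal sketch of a two-proportion z-test on the primary metric. The counts are made up, and `proportions_ztest` from statsmodels is just one convenient way to run such a test, so treat this as a template rather than Audacity's actual analysis.

```python
# A minimal sketch (made-up numbers): is the difference in enrollment rate
# between test and control statistically distinguishable from zero?
from statsmodels.stats.proportion import proportions_ztest

enrollments = [1520, 1460]    # enrolled visitors: [test, control]
visitors    = [20000, 20000]  # visitors at the top of the funnel: [test, control]

# Two-sided z-test for the difference in proportions (enrollment rate)
z_stat, p_value = proportions_ztest(count=enrollments, nobs=visitors)

lift = enrollments[0] / visitors[0] - enrollments[1] / visitors[1]
print(f"absolute lift in enrollment rate: {lift:+.4f}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A large p-value matches the "neutral" first impression; the checklist below
# is about making sure that impression isn't hiding a real problem (or a win).
```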

  1. Always, always check that your test and control experiences have a similar number of data points, in this case, #visitors at the top of the funnel. If there is a discrepancy, go back and figure out the reason: it could be a bot attack on one of the experiences or a bug in the implementation. Check with your engineering team. (A chi-squared check on the split, sketched after point 6, is a quick way to quantify any discrepancy.)
  2. Start by looking at daily trends of your primary metrics. If there are days when your metric is consistently negative (or positive), you need to be able to explain why. (The same sketch after point 6 includes a daily view of the primary metric.)
  3. Keep track of backend as well as front-end errors. For example, if a particular error spikes on the test experience, you need to understand whether it is caused by a bug in the implementation or by a flaw in the design itself. Ensure that the error is not blocking other steps in the funnel.
  4. In Audacity's example, let's say we see a positive lift up to visits to course pages, but from accounts created onward (enrolled students, course completions) we start seeing a drop. This means something might be wrong in the funnel on or after the course page. Some of the things that can be checked to verify that are:

4.1 Check if it's happening across all platforms where Audacity is running this test.

4.2 Check if it's happening consistently or if a few days account for the negativity. If it's only one or two days, find out what happened on those days. Maybe there was a change to a different part of the website, or maybe the test overlaps with another A/B test and that is causing interference. Once you have the explanation, you may want to re-check the results after excluding those days to ensure the rest of the period doesn't have any other problems.

4.3 Check if it's happening in all regions, all browsers, all languages, etc. Basically, try to narrow down where the problem is by slicing the results along different dimensions. (One way to do this slicing, together with the funnel comparison from 4.5, is sketched right after point 4.5.)

4.4 At this stage, if you have the data available, you can also look at the profile of the visitors showing this behavior; it might be specific to a cohort of users.

4.5 Check if the negativity translates all the way to the end of the funnel; it will help you understand what the problem is. For example, if the negativity carries through to the end of the funnel, it might be because the course description is misleading: when visitors see the actual curriculum, they feel it doesn't match their expectations and don't move forward.

On the other hand, if the negativity appears only at that step and subsequent steps in the funnel are positive, it might mean that the improved course description helps visitors decide upfront whether they are interested in the course, so those who aren't interested don't move forward. And those who do move forward convert better, since the right expectations were set at the beginning.
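To make checks 4.1-4.3 and 4.5 a bit more concrete, here is a small pandas sketch. The schema (an `events` table with `visitor_id`, `experience`, `step`, `platform`, `region`, and `date` columns) and the step names are my own assumptions for illustration, not Audacity's actual data model.

```python
# Hypothetical schema: one row per visitor-step event, with columns
# visitor_id, experience ("test"/"control"), step, platform, region, date.
import pandas as pd

events = pd.read_csv("funnel_events.csv")  # hypothetical export of the test logs

step_order = ["visit", "course_page", "account_created", "enrolled", "completed"]

# Unique visitors reaching each step, per experience
funnel = (events.groupby(["step", "experience"])["visitor_id"]
                .nunique()
                .unstack("experience")
                .reindex(step_order))

# Step-to-step conversion. A gap that opens at one step and persists to the
# end of the funnel (check 4.5) reads differently from one that recovers later.
step_conversion = funnel / funnel.shift(1)
print(step_conversion)

# Checks 4.1 and 4.3: slice the problem step by a dimension (platform, region,
# browser, ...) to see whether the drop is broad-based or concentrated.
drop_step = events[events["step"] == "account_created"]
by_platform = (drop_step.groupby(["platform", "experience"])["visitor_id"]
                        .nunique()
                        .unstack("experience"))
# Comparing raw counts assumes a roughly 50/50 split, which check 1 should have confirmed.
print(by_platform.assign(test_vs_control=lambda d: d["test"] / d["control"] - 1))
```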

5. Check your data sanity. Confirm that there is no data loss. (A couple of quick checks are sketched right below.)
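Point 5 is easy to skip, but a few quick assertions on the logs usually cover it. The file and column names below are, again, assumptions made purely to show the shape of the checks.

```python
# Quick data-sanity checks on the (hypothetical) event logs before trusting the results
import pandas as pd

events = pd.read_csv("funnel_events.csv", parse_dates=["date"])

# 1) No missing days: every calendar day of the test window should have logs
expected_days = pd.date_range(events["date"].min(), events["date"].max(), freq="D")
missing_days = expected_days.difference(events["date"].dt.normalize().unique())
assert missing_days.empty, f"missing logs for: {list(missing_days)}"

# 2) No visitor assigned to both experiences (a common source of dirty data)
assignments_per_visitor = events.groupby("visitor_id")["experience"].nunique()
assert (assignments_per_visitor == 1).all(), "some visitors appear in both test and control"

# 3) No unexpected nulls in the columns the analysis depends on
assert events[["visitor_id", "experience", "step"]].notna().all().all()
```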

6. Lastly, go through the experience yourself and look closely at each step of the funnel. Sometimes the issue cannot be explained by the data; instead, you may find a design flaw that is confusing visitors and causing them to abandon the website at a particular step of the funnel. This comes with experience and might be difficult to assess in your first few tests.
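Finally, circling back to points 1 and 2 from the checklist: the sketch below shows a chi-squared check for a sample-ratio mismatch (assuming a 50/50 allocation) and a quick daily view of the enrollment rate, using the same hypothetical schema as before.

```python
# Points 1 and 2, in code form (made-up schema again)
import pandas as pd
from scipy.stats import chisquare

events = pd.read_csv("funnel_events.csv", parse_dates=["date"])

# Point 1: with a 50/50 allocation, visitor counts per experience should be close.
visitors = events.groupby("experience")["visitor_id"].nunique()
stat, p_value = chisquare(visitors, f_exp=[visitors.sum() / 2] * 2)
print(visitors)
print(f"sample-ratio check: p = {p_value:.4f}")  # a very small p-value means: investigate the split

# Point 2: daily enrollment rate per experience. Look for days that are
# consistently out of line with the rest of the test period.
daily = events.assign(day=events["date"].dt.date)
daily_visitors = (daily.groupby(["day", "experience"])["visitor_id"]
                       .nunique().unstack("experience"))
daily_enrolled = (daily[daily["step"] == "enrolled"]
                       .groupby(["day", "experience"])["visitor_id"]
                       .nunique().unstack("experience"))
print(daily_enrolled / daily_visitors)  # days where test lags control need an explanation
```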

Hope this was helpful in building a basic understanding of how to read A/B test results.
