Analyzing A/B Test Results: Data-Driven Decision-Making with Google Tag Manager and Google Analytics

Yunus Emre Tat
Published in Trendyol Tech
Jun 26, 2023


In the previous article, we discussed how we set up and implemented the A/B testing infrastructure in our TrendyolAds platform. A/B testing has provided us with valuable insights, allowing us to refine our product and enhance the user experience. Now we shift our focus to analyzing and interpreting the results of our A/B tests, enabling us to make data-driven decisions that further optimize our TrendyolAds products.

In order to facilitate A/B testing and analyze user behavior on our TrendyolAds platform, we utilize Google Tag Manager (GTM) in conjunction with Google Analytics.

We use GTM for the following reasons:

  1. Customization and Flexibility: With GTM, we can create custom user-defined variables, such as AB Test 1 and AB Test 2, to store the specific test information from the SellerAdsABTests session variable. These variables allow us to access and use the test information in a structured way.
  2. Event Tracking and Analysis: By configuring GTM tags, we can track specific events on our website, such as button clicks or form submissions, and send this data to Google Analytics (a minimal example follows this list). This enables us to analyze user interactions, measure the impact of different tests, and gain insights into user behavior.
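
As a minimal illustration of that event flow (the event and field names here are hypothetical, not our production schema), a click can be surfaced to GTM by pushing an event into the dataLayer, which a custom-event trigger then forwards to a Google Analytics tag:

```javascript
// Hypothetical event and field names, for illustration only.
// A GTM custom-event trigger listening for 'createNewAdClick' can fire a
// Google Analytics event tag with these category/action/label values.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: 'createNewAdClick',
  eventCategory: 'AdvertListing',
  eventAction: 'click',
  eventLabel: 'CreateNewAd'
});
```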

In our A/B testing process, we store the SellerAdsABTests variable in the user’s session in the format TestName_RandomValue. For example, consider the value A_12-B_4, where A represents Test A with a random value of 12 and B represents Test B with a random value of 4.

Example of storing our value in a cookie on the web
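
A minimal sketch of how such a cookie could be written on the web (the cookie attributes and the 30-day lifetime are illustrative assumptions, not our exact implementation):

```javascript
// Illustrative sketch: persist the test assignments in a cookie named
// SellerAdsABTests using the TestName_RandomValue format, e.g. "A_12-B_4".
var assignments = 'A_12-B_4';
document.cookie = 'SellerAdsABTests=' + encodeURIComponent(assignments) +
  '; path=/; max-age=' + 60 * 60 * 24 * 30; // 30 days (assumed lifetime)
```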

To extract the values from the session in Google Tag Manager, we create user-defined variables: AB Test 1, AB Test 2, AB Test 3, abtest-a, abtest-b, and abtest-c. The abtest variables are configured to retrieve the corresponding randomized value from the SellerAdsABTests variable: abtest-a retrieves the random value for Test A, abtest-b for Test B, and abtest-c for Test C. This setup allows us to run and track three different tests simultaneously within the TrendyolAds platform.

Let’s focus on abtest-b, the user-defined variable that stores the random value for Test B. Its value is derived from the SellerAdsABTests variable in the user’s session: a JavaScript code snippet in Google Tag Manager retrieves the random value for Test B and assigns it to abtest-b. For example, if the SellerAdsABTests variable in the user’s session contains “A_12-B_4-C_8”, the value of abtest-b is 4. A sketch of such a variable is shown below.
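
Here is what that Custom JavaScript variable could look like (the cookie-based lookup and parsing details are assumptions based on the format described above):

```javascript
// GTM Custom JavaScript variables are anonymous functions that return a value.
// This one reads the SellerAdsABTests cookie (e.g. "A_12-B_4-C_8") and
// returns the random value recorded for Test B, e.g. "4".
function() {
  var match = document.cookie.match(/(?:^|;\s*)SellerAdsABTests=([^;]*)/);
  if (!match) return undefined;
  var tests = decodeURIComponent(match[1]).split('-'); // ["A_12", "B_4", "C_8"]
  for (var i = 0; i < tests.length; i++) {
    var parts = tests[i].split('_');                   // e.g. ["B", "4"]
    if (parts[0] === 'B') return parts[1];             // random value for Test B
  }
  return undefined;
}
```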

Furthermore, we create a custom JavaScript function in Google Tag Manager to determine the value of the AB Test 2 variable based on the random value stored in abtest-b. If the random value falls between 1 and 50, it assigns the value A_Daily_Budget_100tl_text; if it falls between 51 and 100, it assigns B_Daily_Budget_250tl_text; and if it falls outside these ranges, it assigns C-Error. The values A_Daily_Budget_100tl_text and B_Daily_Budget_250tl_text represent specific variations within the A/B testing process.
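
A sketch of the corresponding Custom JavaScript variable behind AB Test 2, assuming {{abtest-b}} resolves to the random value retrieved above:

```javascript
// Maps the random value in {{abtest-b}} to a named variation.
function() {
  var value = parseInt({{abtest-b}}, 10); // GTM substitutes the variable at runtime
  if (value >= 1 && value <= 50) return 'A_Daily_Budget_100tl_text';
  if (value >= 51 && value <= 100) return 'B_Daily_Budget_250tl_text';
  return 'C-Error'; // missing or out-of-range value
}
```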

Once the TestName_RandomValue data is captured in GTM, we can leverage it in Google Analytics by creating custom dimensions. These dimensions allow us to segment and analyze user behavior by assigned test group, enabling us to make data-driven decisions. For example, we can filter event actions or labels in Google Analytics reports by the AB Test 1 and AB Test 2 dimensions to compare the performance of different variations.
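
For instance, once GTM resolves the AB Test 2 variable, the Google Analytics tag can attach it to each hit. Expressed directly in gtag.js (a simplification of what the GTM tag does, with an assumed parameter name), the resulting hit looks roughly like this:

```javascript
// Rough equivalent of the hit a GTM-managed GA tag would send. The
// ab_test_2 parameter name is an assumption; it must be registered as a
// custom dimension in Google Analytics before it appears in reports.
gtag('event', 'create_new_ad_click', {
  ab_test_2: 'B_Daily_Budget_250tl_text' // resolved from {{AB Test 2}}
});
```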

During the A/B test, it is important to follow a structured approach. Here are the key steps involved:

1. Hypothesis Formulation:

  • Start by formulating a hypothesis based on the desired outcome of the A/B test. For example, in our case, the hypothesis is that modifying the design of the create button will result in an increase in the click-through rate.
  • Clearly define the control group (existing design) and the variant group (new design) to compare their performance.

2. Implementing Variations:

  • Implement the design variations for the control and variant groups following the steps outlined earlier.
  • Ensure that the variations are properly tracked using Google Tag Manager and that the TestName_RandomValue data is captured and stored in custom user-defined variables.
The advert listing page features two different variations of the create new ad button.

3. Running the A/B Test:

  • Deploy the A/B test to the target audience and let it run for a specified duration to collect sufficient data.
  • For mobile applications, track events using the Firebase Analytics SDK (Google Analytics for Firebase), which sends data directly to Google Analytics. By adding custom dimensions to these events, we can filter and analyze the test results without the need for Google Tag Manager (see the sketch after this list).
  • For web applications, utilize Google Tag Manager to extract and format the data from the session as we demonstrated in the above examples, and then send it to Google Analytics. This ensures that the test data is properly structured and can be easily analyzed using custom dimensions in Google Analytics.
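
On mobile, the core call is the Firebase Analytics logEvent API, which every platform SDK exposes. As an illustration only (the event and parameter names are hypothetical), here is the modular web (JavaScript) form of the same idea:

```javascript
// Illustrative only: the native mobile SDKs expose an equivalent logEvent API.
import { initializeApp } from 'firebase/app';
import { getAnalytics, logEvent } from 'firebase/analytics';

const app = initializeApp({ /* firebaseConfig from the Firebase console */ });
const analytics = getAnalytics(app);

// ab_test_2 must be registered as a custom dimension in Google Analytics.
logEvent(analytics, 'create_new_ad_click', {
  ab_test_2: 'B_Daily_Budget_250tl_text'
});
```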

4. Analyzing and Interpreting the Results:

  • Once the A/B test is complete, access the data collected in Google Analytics.
  • Create custom reports in Google Analytics to analyze the performance of the control and variant groups.
Analytics 360 filtering page
  • Utilize the TestName_RandomValue dimensions, such as AB Test 1 and AB Test 2, to segment the data and compare the metrics between the groups.
  • Measure the key metrics, such as click-through rate or conversion rate, for each group and compare the results.

To evaluate the impact of the A/B test variations, we extracted data from Google Analytics on a daily basis for a period of one month.

While retrieving the data from Google Analytics, we tracked the number of users who accessed our TrendyolAds Panel each day (captured in the Sessions column). Among those users, we recorded the number who clicked the CreateNewAd button (the YeniReklamOlustur column). We then calculated the Ad Create Conversion Rate by dividing the number of users who clicked the button by the number of users who accessed the panel, giving us the percentage of users who engaged with the button.

After gathering the Sessions, CreateNewAd, and Ad Create Conversion Rate data for both versions, we compared the Ad Create Conversion Rate percentages in the Delta Ad Create CR column. Positive percentages indicate an increase in button usage for the new design compared to the old design on a given day. To aid the decision-making process, we calculated a 95% confidence interval over the daily delta values. The result was a confidence interval of 22.04% ± 2.37%, indicating that by transitioning to the new button design we can expect an increase in button usage of approximately 22.04% ± 2.37%, based on the one-month test period.
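
As a sketch of the statistics behind that interval (the daily delta values here are hypothetical, consistent with the note below), the 95% confidence interval is the mean of the daily Delta Ad Create CR values plus or minus roughly 1.96 standard errors:

```javascript
// Computes mean ± 1.96 * (s / sqrt(n)) over the daily Delta Ad Create CR values.
// 1.96 is the normal-approximation critical value; with ~30 daily samples a
// t critical value (~2.045) would widen the interval slightly.
function confidenceInterval95(dailyDeltas) {
  var n = dailyDeltas.length;
  var mean = dailyDeltas.reduce(function (sum, d) { return sum + d; }, 0) / n;
  var variance = dailyDeltas.reduce(function (sum, d) {
    return sum + Math.pow(d - mean, 2);
  }, 0) / (n - 1); // sample variance
  var margin = 1.96 * Math.sqrt(variance / n);
  return { mean: mean, margin: margin }; // e.g. { mean: 22.04, margin: 2.37 }
}
```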

It is important to note that the numbers presented in the following Excel sheet are purely hypothetical and do not reflect actual Trendyol data. They are used for illustrative purposes to demonstrate how we interpret A/B test results statistically.

The data we get from Google Analytics for 1 month

Conclusion

A/B testing is a powerful methodology that empowers us to make data-driven decisions and optimize our products effectively. In the previous part of our journey and throughout this article, we have explored the process of setting up and implementing an A/B testing infrastructure and analyzed the results obtained from our A/B tests.

In the first part of our story, we explored the fundamentals of A/B testing and recognized its significance in evaluating the effects of design and feature variations. We also delved into the technical aspects of establishing the A/B testing infrastructure, covering topics such as designing the experiment schema and generating randomized experiments. With the help of the ExperimentService class, we were able to manage experiments, assign variations to users, and store experiment data in the session or local storage. By leveraging Google Tag Manager and Google Analytics, we gained valuable insights into user behavior and analyzed the results of our A/B tests.

The process involved measuring key metrics, such as click-through rate and conversion rate, comparing the results between test groups, and calculating confidence intervals to estimate the true effects of our experiments.

In conclusion, A/B testing is a vital tool in our product development process, enabling us to validate hypotheses, drive innovation, and provide an enhanced user experience. By embracing A/B testing, we can optimize our TrendyolAds product and deliver valuable solutions to our sellers, helping them increase their sales and achieve their goals.

That’s a wrap! We appreciate your attention throughout this article. We hope that the information provided has been helpful in understanding the process of setting up and implementing an A/B testing infrastructure. If you have any feedback or questions, we would be delighted to hear from you.

Co-authored with Yasin UYSAL

If you are interested in this world, you can join Trendyol’s rich technology team.
