Boosting Conversion Rates and User Experience: A Deep Dive into A/B Testing for React Web and React Native

Yunus Emre Tat · Published in Trendyol Tech · Jun 26, 2023


A/B testing, also known as split testing or bucket testing, is a powerful method used by developers and product teams to compare and evaluate different variations of a feature, design, or user experience. By randomly assigning users to different variants and measuring their performance against predefined goals, A/B testing enables data-driven decision-making and optimization of products.

In this article, we will embark on an exciting journey of implementing a custom A/B testing infrastructure in web and mobile applications, all while harnessing the power of Google Tag Manager and Google Analytics. By integrating these tools into our A/B testing process, we can leverage their robust tracking and analytics capabilities to gain deeper insights into user behavior and measure the impact of our variations.

What can you expect from this article?

We will guide you step-by-step, providing clear instructions and code snippets, to help you build your own A/B testing framework from scratch. Along the way, you’ll gain a deeper understanding of A/B testing principles and have the ability to tailor the implementation to your specific project requirements.

Here’s a glimpse of what we’ll cover:

  1. Understanding A/B testing: We’ll start by exploring the fundamentals of A/B testing, including its purpose and benefits. Understanding these foundations will set the stage for building an effective A/B testing infrastructure.
  2. Setting up the A/B testing infrastructure: We’ll dive into the key components of setting up the infrastructure. This includes designing the experiment schema, generating randomized experiments, and tracking and collecting relevant data. We’ll also introduce the power of Google Tag Manager and Google Analytics in simplifying the implementation and providing valuable insights.
  3. Implementing A/B testing in React Web and React Native: We’ll focus on the practical aspects of implementing A/B testing in both React Web applications and React Native mobile applications. This will involve integrating the custom experiment engine and conducting A/B tests.

By the end of this article, you will have gained a comprehensive understanding of A/B testing and how to implement your own custom infrastructure in applications. You’ll also be able to leverage the capabilities of Google Tag Manager and Google Analytics to continuously optimize your user experiences.

So, join us on this exciting journey as we build our A/B testing infrastructure, all while harnessing the power of Google Tag Manager and Google Analytics. Let’s dive in and unlock the potential of data-driven decision-making!

1. Understanding the Benefits of A/B Testing

As the TrendyolAds team, our goal is to develop products that help sellers increase their sales through effective advertising. A crucial aspect of our product development process is conducting A/B tests, which allow us to gather valuable data and insights to refine our platform and enhance user experience. By leveraging the power of A/B testing, we can make evidence-based decisions that result in improved sales and overall success for sellers’ businesses.

In our domain, we have implemented A/B testing in various scenarios to optimize our platform. Here are some examples:

  1. Button Placement and Shape: Through A/B testing, we experiment with different placements and shapes for buttons to improve user experience and encourage click-throughs.
The advert listing page features two different variations of the create new ad button in the mobile client

2. Informational Messages: During the ad creation process, we test different informational messages to guide sellers and provide them with relevant instructions and tips.

3. Action Buttons for Ad Status Updates: On the ad listing screen, we experiment with different action buttons that sellers use to update the status of their ads, aiming to enhance usability and efficiency.

Two different variations of the action buttons

These examples demonstrate how A/B testing allows us to iteratively test and refine various elements of our product, ultimately maximizing its impact. By analyzing the data collected from different user groups exposed to different variants, we gain insights into which variations perform better in terms of user engagement, conversion rates, and overall sales.

Now, let’s explore the steps involved in setting up and implementing an A/B testing infrastructure in web and mobile applications. By understanding the underlying principles and applying them to our specific domain, we can establish a solid foundation for effective A/B testing and optimize our advertising product.

2. Building Your Custom A/B Testing Infrastructure

2.1 Designing the Experiment Schema: Key Considerations

The first step in setting up our A/B testing infrastructure is to design the experiment schema, which defines the structure of our A/B tests. In the context of our TrendyolAds product, we want to randomly divide users into control and variant groups. To accomplish this, we store test assignments and random values in the user’s session for web users and in local storage for mobile users. The format we use is TestName_RandomValue. For example, A_4 represents Test A with a random value of 4.

In the user’s cookie on the web or in local storage on mobile, we create a key called SellerAdsABTests with a value like A_41-B_9-C_38-D_92. The random values assigned to the tests are generated within the range of 0 to 100.

By assigning random values to users for different tests, we can later use these values in the logic to take appropriate actions. For example, consider a scenario where we have three variations of a Create button for Test B. We segment users based on their assigned random values into three groups, each corresponding to a range of random values: 0–33 for Variation 1, 33–66 for Variation 2, and 66–100 for Variation 3 (each range excluding its upper bound).
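As a concrete sketch of this segmentation (the helper name and the exact boundary handling are our assumptions, not code from the article):

```typescript
// Hypothetical helper mapping a user's random score (0-100) for Test B
// to one of the three Create-button variations; each range treats its
// upper bound as exclusive.
type Variation = 1 | 2 | 3;

function getCreateButtonVariation(score: number): Variation {
  if (score < 33) return 1; // 0-33  -> Variation 1
  if (score < 66) return 2; // 33-66 -> Variation 2
  return 3;                 // 66-100 -> Variation 3
}
```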

By incorporating this approach into our A/B testing infrastructure, we can effectively assign random values to users and use those values to determine which variations of features or designs they will experience. This allows us to gather data and analyze the performance of different variants to optimize our TrendyolAds product.

Example of keeping our value in session on web
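A minimal web-side sketch of keeping this value around (the key name and value format come from the article; using sessionStorage and the helper names are our assumptions — a session cookie would serve equally):

```typescript
// Tiny key/value abstraction so the same helpers work on top of
// sessionStorage (web) or AsyncStorage-style wrappers (mobile).
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const EXPERIMENT_KEY = "SellerAdsABTests";

// Persist the encoded assignment string, e.g. "A_41-B_9-C_38-D_92".
function saveAssignments(store: KeyValueStore, encoded: string): void {
  store.setItem(EXPERIMENT_KEY, encoded);
}

// Read it back (null if the user has no assignments yet).
function loadAssignments(store: KeyValueStore): string | null {
  return store.getItem(EXPERIMENT_KEY);
}

// In the browser: saveAssignments(window.sessionStorage, "A_41-B_9-C_38-D_92");
```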

2.2 Generating Randomized Experiments: The Core of A/B Testing

Now it’s time to create the A/B testing infrastructure. Let’s explain how our module will work.

Our module’s code will retrieve the test values from the configuration file. For experiments with a running status, it will check whether the corresponding testName_randomValue exists in the cookie or local storage. If it doesn’t exist, the code will generate the testName_randomValue and store it accordingly.

For experiments with a paused status, the code will skip assigning them to users if they have not been previously stored in the session/local storage. This ensures that the variations associated with these tests are not applied to users unless they have already been assigned and stored in the session/local storage.

Furthermore, for experiments that have been deleted from config, the code will remove the associated testName_randomValue from the cookie (for web) or local storage (for mobile), ensuring that data related to deleted experiments is cleaned up.

By implementing this code, we can effectively manage the execution and storage of A/B tests in our TrendyolAds product. It ensures that users are assigned to different variants based on their testName_randomValue, allows for pausing and resuming of tests, and handles the removal of deleted experiments.

Let’s dive into the code and explore its functionalities without further ado.

the folder structure of our experiment module

2.2.1 Creating the Config File

To set up the experiments, we’ll create a config.ts file where we define the experiments using the name and status format.
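A minimal version of that file might look like this (the literal status strings are an assumption; the real file could equally use the ExperimentStatuses enum introduced later):

```typescript
// config.ts -- experiment definitions in { name, status } format.
// Experiment A is actively running; experiment B is paused.
export const experimentConfigs = [
  { name: "A", status: "running" },
  { name: "B", status: "paused" },
];
```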

In this file, we specify two experiments, A and B. The experiment A is set to a running status, indicating that it should be actively executed. On the other hand, the experiment B is set to a paused status, indicating that it should not be executed at the moment.

By separating the experiments’ configurations into a dedicated config file, we can easily manage and modify them as needed. This approach provides flexibility in controlling the status of each experiment without modifying the code directly.

2.2.2 Creating Constant File

In the experiment-service.constants.ts file, we define several constants that are essential for our A/B testing experiments.

Let’s break down the purpose of each constant:

  • EXPERIMENT_COOKIE_NAME: Represents the name of the cookie or local storage key, which in this case is ‘SellerAdsABTests’.
  • EXPERIMENT_COOKIE_DELIMITER: Represents the delimiter used to separate the different experiments in the cookie or local storage, which is ‘-’ in this example.
  • EXPERIMENT_DELIMITER: Represents the delimiter used to separate the experiment name and random value, which is ‘_’ in this example.

Additionally, we define the EXPERIMENTS constant, which represents the different experiments we have. In this example, we have an experiment named ADVERT_CREATE_BUTTON with the name A. The condition specified for this experiment is that if the score is bigger than 50, a specific action or version should be taken.
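A sketch of the constants file matching that description (the exact nesting of EXPERIMENTS — NAME, CONDITION, SCORE_BIGGER_THAN — is our assumption):

```typescript
// experiment-service.constants.ts -- shared constants for the experiment module.
export const EXPERIMENT_COOKIE_NAME = "SellerAdsABTests";
export const EXPERIMENT_COOKIE_DELIMITER = "-"; // between experiments: A_41-B_9
export const EXPERIMENT_DELIMITER = "_";        // between name and score: A_41

// Experiment definitions keyed by a descriptive id.
export const EXPERIMENTS = {
  ADVERT_CREATE_BUTTON: {
    NAME: "A",
    CONDITION: {
      SCORE_BIGGER_THAN: 50,
    },
  },
};
```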

By centralizing these constants in a dedicated file, we ensure consistency and easy modification of experiment-related values throughout the codebase.

2.2.3 Creating Enums and Types File

In the experiment-service.enums.ts file, let’s define an enum to represent the statuses of our experiments:
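A minimal version might look like this (the string values are our assumption):

```typescript
// experiment-service.enums.ts -- the possible statuses of an experiment.
export enum ExperimentStatuses {
  RUNNING = "running",
  PAUSED = "paused",
}
```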

This enum, ExperimentStatuses, defines the possible statuses of our experiments, including running and paused.

Additionally, let’s create another file for types, where we define types related to our experiments. In this case, we’ll define the ExperimentConfig type, which represents the experiment configuration stored in our config file. It includes the experiment name as a string and the status as one of the values from the ExperimentStatuses enum. We’ll also define the ExperimentCookie type, representing the decoded value of the EXPERIMENT_COOKIE_NAME key in the session or local storage. It includes the experiment name as a string and the score as a number. Lastly, we define the ExperimentCookieMap type, which represents a mapping between experiment names (as keys) and their corresponding cookies (as values), utilizing the ExperimentCookie type.
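The types file might be sketched as follows (ExperimentStatuses is re-declared here only so the snippet stands alone; in the module it would be imported from experiment-service.enums.ts — exact shapes are assumptions based on the description):

```typescript
// Re-declared for self-containment; normally imported from experiment-service.enums.ts.
enum ExperimentStatuses {
  RUNNING = "running",
  PAUSED = "paused",
}

// One entry in the config file.
type ExperimentConfig = {
  name: string;
  status: ExperimentStatuses;
};

// Decoded value stored under the EXPERIMENT_COOKIE_NAME key.
type ExperimentCookie = {
  name: string;
  score: number;
};

// Experiment name -> decoded cookie entry.
type ExperimentCookieMap = Map<string, ExperimentCookie>;
```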

By utilizing these enums and types, we ensure type safety and provide clear structures for representing experiment statuses, experiment configurations, decoded experiment cookies, and experiment cookie maps in our A/B testing infrastructure.

2.2.4 Creating Utils File

In the experiment-service.utils.ts file, let’s implement two functions: encodeExperimentCookie and decodeExperimentCookie.

encodeExperimentCookie: This function takes an ExperimentCookieMap as input and returns a string representation of the encoded cookie value. It iterates over the ExperimentCookieMap, combines the experiment name and score using an underscore delimiter, and joins them with a hyphen delimiter to create the final encoded cookie value.

For example, if we have experiments A with a score of 41 and B with a score of 9, the function produces the encoded value A_41-B_9.

decodeExperimentCookie: This function takes the encoded cookie value as input and decodes it into an ExperimentCookieMap. It splits the encoded value using a hyphen delimiter, iterates over the split values, further splits each value into the experiment name and score using an underscore delimiter, and checks if they exist and are valid. If so, it adds them to a Map object.

These functions provide the necessary functionality to encode and decode the experiment cookie values in the session or local storage. They ensure that the experiment information is stored and retrieved accurately, enabling us to use the score values for logic and decision-making in our A/B testing infrastructure.
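The two helpers described above can be sketched like this (constants and types are inlined so the snippet stands alone; in the module they would be imported from the constants and types files):

```typescript
// experiment-service.utils.ts -- encode/decode helpers for the experiment cookie.
const EXPERIMENT_COOKIE_DELIMITER = "-";
const EXPERIMENT_DELIMITER = "_";

type ExperimentCookie = { name: string; score: number };
type ExperimentCookieMap = Map<string, ExperimentCookie>;

// Map { A -> {A, 41}, B -> {B, 9} }  =>  "A_41-B_9"
export function encodeExperimentCookie(experiments: ExperimentCookieMap): string {
  return [...experiments.values()]
    .map(({ name, score }) => `${name}${EXPERIMENT_DELIMITER}${score}`)
    .join(EXPERIMENT_COOKIE_DELIMITER);
}

// "A_41-B_9"  =>  Map { A -> {A, 41}, B -> {B, 9} }
// Entries with a missing name or non-numeric score are skipped.
export function decodeExperimentCookie(encoded: string): ExperimentCookieMap {
  const experiments: ExperimentCookieMap = new Map();
  if (!encoded) return experiments;
  for (const entry of encoded.split(EXPERIMENT_COOKIE_DELIMITER)) {
    const [name, rawScore] = entry.split(EXPERIMENT_DELIMITER);
    const score = Number(rawScore);
    if (name && !Number.isNaN(score)) {
      experiments.set(name, { name, score });
    }
  }
  return experiments;
}
```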

2.2.5 Creating ExperimentService

The ExperimentService class is responsible for managing experiments, including initializing experiments, storing and retrieving the experiment cookie, deleting old experiments, and assigning new experiments based on the configuration.

The ExperimentService class consists of the following methods:

  1. setExperimentsCookie(experiments: ExperimentCookieMap): This method encodes the experimentsCookieMap into a string and stores it in the session or local storage.
  2. deleteOldExperiments(): This method removes experiments from the experimentsCookieMap that are no longer present in the configuration.
  3. assignExperiments(): This method handles the assignment of experiments based on the configuration.
  4. getRandomExperimentScore(): number: This method generates a random score for experiments.
  5. getExperiment(name: string): Promise<ExperimentCookie>: This method retrieves the experiment cookie for a given experiment name.
  6. initialize(): This method initializes the experiments. It retrieves the encoded cookie value, decodes it into the experimentsCookieMap field, and then proceeds to delete old experiments and assign new ones based on the configuration.

The initializeExperimentService() function creates a singleton instance of the ExperimentService class, initializes the experiments, and returns the initialized experimentService instance.
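The service described above might be sketched as follows. Storage is injected as a tiny key/value interface so the same class can sit on top of cookies (web) or local storage (mobile); constants, types, and helpers are inlined for self-containment, and getExperiment is synchronous here for brevity (the article's version returns a Promise). Everything beyond the article's description is an assumption.

```typescript
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

type ExperimentConfig = { name: string; status: "running" | "paused" };
type ExperimentCookie = { name: string; score: number };
type ExperimentCookieMap = Map<string, ExperimentCookie>;

const EXPERIMENT_COOKIE_NAME = "SellerAdsABTests";

const encode = (m: ExperimentCookieMap): string =>
  [...m.values()].map(({ name, score }) => `${name}_${score}`).join("-");

const decode = (encoded: string): ExperimentCookieMap => {
  const map: ExperimentCookieMap = new Map();
  for (const entry of encoded ? encoded.split("-") : []) {
    const [name, raw] = entry.split("_");
    const score = Number(raw);
    if (name && !Number.isNaN(score)) map.set(name, { name, score });
  }
  return map;
};

class ExperimentService {
  private experimentsCookieMap: ExperimentCookieMap = new Map();

  constructor(
    private store: KeyValueStore,
    private configs: ExperimentConfig[],
  ) {}

  initialize(): void {
    // Decode whatever is already stored, then reconcile with the config.
    this.experimentsCookieMap = decode(this.store.getItem(EXPERIMENT_COOKIE_NAME) ?? "");
    this.deleteOldExperiments();
    this.assignExperiments();
    this.setExperimentsCookie(this.experimentsCookieMap);
  }

  getExperiment(name: string): ExperimentCookie | undefined {
    return this.experimentsCookieMap.get(name);
  }

  private setExperimentsCookie(experiments: ExperimentCookieMap): void {
    this.store.setItem(EXPERIMENT_COOKIE_NAME, encode(experiments));
  }

  private deleteOldExperiments(): void {
    // Drop stored experiments that are no longer present in the config.
    const known = new Set(this.configs.map((c) => c.name));
    for (const name of [...this.experimentsCookieMap.keys()]) {
      if (!known.has(name)) this.experimentsCookieMap.delete(name);
    }
  }

  private assignExperiments(): void {
    // Assign only running experiments that lack a stored value; paused
    // experiments keep an existing value but never receive a new one.
    for (const { name, status } of this.configs) {
      if (status === "running" && !this.experimentsCookieMap.has(name)) {
        this.experimentsCookieMap.set(name, { name, score: this.getRandomExperimentScore() });
      }
    }
  }

  private getRandomExperimentScore(): number {
    return Math.floor(Math.random() * 101); // 0-100 inclusive
  }
}

function initializeExperimentService(store: KeyValueStore, configs: ExperimentConfig[]): ExperimentService {
  const service = new ExperimentService(store, configs);
  service.initialize();
  return service;
}
```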

By utilizing the ExperimentService class, we can effectively manage the execution, storage, and retrieval of A/B tests in our TrendyolAds product. It provides the necessary functionalities to initialize experiments, store and retrieve experiment cookies, delete old experiments, and assign new experiments based on the configuration.

3. Implementing A/B Testing in React Web and React Native Mobile

3.1. Conducting A/B Tests

In this code snippet, we first use the getExperiment function from the ExperimentService class to retrieve the experiment cookie value associated with the 'ADVERT_CREATE_BUTTON' experiment name from the session or local storage. This cookie value represents the random value assigned to the user for this experiment.

Then, we evaluate the isAdvertCreateButtonExperiment variable based on certain conditions:

  1. !isCypressTest(): This condition checks if the code is not running in a Cypress test environment.
  2. advertCreateButtonExperimentCookie: This condition checks if the advertCreateButtonExperimentCookie exists (i.e., if the user has a cookie value for the 'ADVERT_CREATE_BUTTON' experiment).
  3. advertCreateButtonExperimentCookie > EXPERIMENTS.ADVERT_CREATE_BUTTON.CONDITION.SCORE_BIGGER_THAN: This condition compares the cookie value with the threshold defined in the EXPERIMENTS constant. If the cookie value is greater than the threshold, the condition evaluates to true.

Based on these conditions, the ternary operator ? : renders either the <NewCreateButton/> component or the <CreateButton/> component. If the isAdvertCreateButtonExperiment condition evaluates to true, the <NewCreateButton/> component is rendered. Otherwise, the <CreateButton/> component is rendered.
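The decision logic can be sketched as a standalone predicate like the one below. The component and constant names come from the article; extracting the check into a function, the stubbed isCypressTest, and passing the decoded cookie as a { name, score } object are our assumptions.

```typescript
type ExperimentCookie = { name: string; score: number };

const EXPERIMENTS = {
  ADVERT_CREATE_BUTTON: { CONDITION: { SCORE_BIGGER_THAN: 50 } },
};

// Stub: in the real app this detects the Cypress test environment,
// so experiments never interfere with end-to-end tests.
const isCypressTest = (): boolean => false;

function isAdvertCreateButtonExperiment(cookie: ExperimentCookie | undefined): boolean {
  return (
    !isCypressTest() &&
    !!cookie &&
    cookie.score > EXPERIMENTS.ADVERT_CREATE_BUTTON.CONDITION.SCORE_BIGGER_THAN
  );
}

// In the component, the flag then drives the ternary render:
//   {isAdvertCreateButtonExperiment(cookie) ? <NewCreateButton /> : <CreateButton />}
```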

4. Conclusion

Throughout this document, we have covered the essential aspects of A/B testing and its implementation in our TrendyolAds platform. By conducting A/B tests and analyzing the results, we have gained valuable insights into user behavior, improved our product’s user experience, and ultimately enhanced the success of our sellers’ businesses. Now, as we move forward to the next section, let’s delve deeper into the analysis and interpretation of the A/B test results. By making data-driven decisions based on these results, we can further optimize our TrendyolAds product, ensuring that our sellers’ advertising efforts yield maximum effectiveness and profitability.

If you’re interested in diving deeper into the analysis and interpretation of A/B test results, we invite you to explore our second article. It delves into the practical aspects of analyzing A/B test data and making data-driven decisions to further optimize our TrendyolAds product.

That’s a wrap! We appreciate your attention throughout this article. We hope that the information provided has been helpful in understanding the process of setting up and implementing an A/B testing infrastructure. If you have any feedback or questions, we would be delighted to hear from you.

Co-authored with Yasin UYSAL

If this world interests you, you can join Trendyol’s rich technology team.
