RapideX: Experiments in Rapido at Scale

What an A/B experiment is and why it matters:

Vikash Singh
3 min read · Jun 8, 2023

A/B experiments, also known as A/B testing or split testing, are a common technique in software development for comparing two or more variations of a product or feature. In an A/B experiment, users are randomly divided into groups, and each group is exposed to a different version of the product or feature. By comparing the outcomes and user behaviors of each group, developers can evaluate which variation performs better and make data-driven decisions about the product’s design, functionality, or user experience.
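At the heart of any A/B platform is a deterministic assignment function: the same user must always land in the same group. As a minimal TypeScript sketch (the hash choice and 50/50 split here are illustrative assumptions, not RapideX internals):

// Deterministically assign a user to "test" or "control".
// An FNV-1a style hash keeps the assignment stable across sessions.
function hashString(s: string): number {
  let h = 2166136261;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0; // force unsigned 32-bit
}

function assignGroup(userId: string, experimentName: string): "test" | "control" {
  // Salt with the experiment name so assignments are independent across experiments.
  return hashString(experimentName + ":" + userId) % 2 === 0 ? "test" : "control";
}

Salting the hash with the experiment name means a user in the test group of one experiment is not systematically in the test group of every other experiment.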

The importance of A/B experiments in software development lies in their ability to provide objective insights into the effectiveness and impact of changes. Here are some key reasons why A/B experiments are valuable:

  1. Data-driven decision-making: A/B experiments help eliminate subjective opinions and rely on concrete data to make decisions.
  2. Iterative improvement: A/B experiments facilitate a continuous improvement process.
  3. Risk reduction: A/B experiments provide a controlled environment for testing new ideas or features before rolling them out to a wider audience.
  4. Personalization and customization: A/B experiments can help tailor experiences to different user segments.
  5. Cost-effectiveness: A/B experiments allow developers to test and validate hypotheses with relatively low cost and effort compared to fully implementing and deploying a new feature.

Overall, A/B experiments provide a systematic approach to improving software products by using empirical evidence. They enable developers to make informed decisions, reduce risks, and deliver better user experiences.

How we model experiments in Rapido:

We model our experiment platform as the following units:

Experimental Lever/Design variation:

These are the different experimental levers or variations we want to test in our A/B experiment: variations of features, designs, pricing models, and so on.
We modify our codebase or system to expose these levers to our users or target audiences.

Experiment/Hypothesis:

An experiment refers to the process of comparing two or more variations of a product or feature to determine which performs better. Each variation represents a different hypothesis that developers aim to test and evaluate based on user data.
We model each variation as a set of rules: the experimental lever it belongs to, a target population, a user-split strategy, day-and-time conditions, data collection, and the configs attached to the variation.
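As an illustration, a variation could be represented by a shape like the following (the field names are our sketch, inferred from the rules above rather than the actual RapideX schema):

interface Variation {
  lever: string;                   // the experimental lever this variation belongs to
  populationCondition: string;     // who qualifies, e.g. "$payload.device_type == 'Mobile'"
  splitStrategy: string;           // how qualifying users are divided into groups
  splitAttribute: string;          // which payload field drives the split
  timeCondition: string;           // day/time rules for when the variation is active
  config: Record<string, unknown>; // parameters exposed to the client on a match
}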

Execution/Experiment run:

In an A/B experiment, a “run” refers to the execution or implementation of the experiment with the specified variations and test conditions. Here are the key steps involved in running an A/B experiment:

  1. Design variations: The experiment starts with designing the different variations of the product or feature that will be tested.
  2. Randomization and assignment: Users participating in the experiment are randomly divided into groups, with each group assigned to a specific variation.
  3. Implementation: The variations are implemented in the software or system, either through code changes, configuration updates, or any other necessary means.
  4. Data collection: During the run of the experiment, data is collected on user interactions, behaviors, and desired metrics.

We have modeled design variation, randomization and assignment, implementation, and data collection into the creation of experiments.
Geo-temporal factors, overridden time conditions, and populated configs, however, are introduced at the execution layer.

Execution lifecycle:

schedule -> run -> pause/resume -> terminate
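One way to picture this lifecycle is as a small state machine with a fixed set of legal transitions. The sketch below is our reading of the lifecycle above; the actual states and rules inside RapideX may differ:

type ExecutionState = "scheduled" | "running" | "paused" | "terminated";

// Legal transitions, mirroring schedule -> run -> pause/resume -> terminate.
const transitions: Record<ExecutionState, ExecutionState[]> = {
  scheduled: ["running", "terminated"],
  running: ["paused", "terminated"],
  paused: ["running", "terminated"],
  terminated: [],
};

function canTransition(from: ExecutionState, to: ExecutionState): boolean {
  return transitions[from].includes(to);
}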

Geo variation modelling:

We modeled geo variation as a route from a source to a destination.
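In code, such a route is just a source/destination pair, which matches the locations entries in the execution payload shown later (a sketch, not the exact RapideX type):

interface GeoRoute {
  source: string;      // e.g. "bangalore"
  destination: string; // same as source for an intra-city experiment
}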

RapideX experiment playbook:

Here we showcase a simple experiment using RapideX.

Experiment lever: a banner shown on the app screen, which we identify as visitor-webpage-banner. On the app screen, we integrate with RapideX to look up the configured experiment:

bannerLever = RapideX.getExperiment("visitor-webpage-banner")
bannerText = bannerLever
    .payload({"user_id": "123"})
    .apply({"source": "bangalore", "destination": "bangalore"})
    .config("banner")
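Here, config("banner") would resolve to the banner value populated at the execution layer ("Welcome User" in the execution example below) when the user matches the experiment’s conditions; a non-matching or control user would presumably fall back to the app’s default banner.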

Experiment: A curl request to configure an experiment with a combination of the following rules:

  • Days: Monday, Tuesday, and Wednesday
  • Dates: 1st February 2023 to 1st March 2023
  • Time: 8.00 AM to 7.00 PM
  • Population Condition: Access device should be Mobile
  • Locations (Test): Bangalore
{
  "name": "greeter_experiment",
  "actor": "visitor",
  "domain": "webpage",
  "feature": "banner",
  "time_condition": "$weekday in ['Monday', 'Tuesday', 'Wednesday'] and $hour in 8..19",
  "population_condition": "$payload.device_type == 'Mobile'",
  "split_strategy": "EvenAttributeAsTest",
  "split_attribute": "$payload.user_id",
  "test_control": true,
  "config": {
    "params": [
      {
        "name": "banner",
        "type": "string",
        "struct": null
      }
    ],
    "schema": {
      "id": "user_id",
      "device_type": "device_type"
    }
  }
}
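The split_strategy EvenAttributeAsTest, read literally, suggests that users whose split_attribute is even land in the test group and the rest in control. A sketch of that reading (the parity rule is our assumption, not confirmed RapideX behavior):

// Sketch of "EvenAttributeAsTest", read literally: an even split attribute
// (here $payload.user_id) puts the user in the test group.
function splitEvenAttributeAsTest(userId: string): "test" | "control" {
  const n = parseInt(userId, 10);
  return n % 2 === 0 ? "test" : "control";
}

// With the client payload {"user_id": "123"} above, 123 is odd -> control.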

Execution: A curl request to schedule an execution in Bangalore from 1st May to 15th May:

{
  "name": "greeter_execution",
  "locations": [
    {
      "source": "bangalore",
      "destination": "bangalore",
      "type": "city"
    }
  ],
  "config": {
    "params": [
      {
        "name": "banner",
        "type": "string",
        "value": "Welcome User"
      }
    ]
  },
  "start_date": "2023-05-01",
  "end_date": "2023-05-15"
}
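Note how the execution supplies the concrete value ("Welcome User") for the banner param that the experiment only declared with a type. This is the “populated configs at the execution layer” mentioned earlier, and it is the value the client snippet’s config("banner") call ultimately receives for test users.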
