How we make product increments using A/B Tests at QuintoAndar

Matheus Luna
Blog Técnico QuintoAndar
6 min read · Aug 22, 2023

Here we will share the experience we had applying an A/B test to one of the products most accessed by QuintoAndar users. The idea was to redesign the product, improve the user experience, and incorporate more brand elements.

What is an A/B test?

An A/B test is an experimentation technique in which two versions of a product are randomly presented to users to determine which one performs better against a specific business objective. The two versions are compared during the testing period through the analysis of metrics such as click-through rates on links or buttons, conversion rates, and so on. A/B testing is widely used in product development to make data-driven decisions and improve the performance of the elements under test.

There are several reasons to conduct an A/B test before implementing a new feature. A/B testing gives you concrete data on how users interact with different versions of your product, enabling the team to decide which version works better against the established goals. This technique is important for ensuring the quality of a new feature, avoiding issues and reducing the risks of a new product launch.
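To make the comparison concrete, here is a minimal sketch of how two variants' conversion rates might be compared with a two-proportion z-test. The function names and the visitor/conversion numbers are hypothetical, not taken from QuintoAndar's analysis:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic comparing variant B against variant A.

    conv_* are conversion counts, n_* are visitor counts per variant.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 480/10,000 conversions in A vs 540/10,000 in B.
z = two_proportion_z(480, 10_000, 540, 10_000)
# A |z| above ~1.96 would indicate significance at the 5% level.
```

In practice, experimentation platforms run this kind of test (or a Bayesian equivalent) continuously over the testing period rather than as a one-off calculation.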

Managing the A/B test on the 5A listing page

Before developing the new product, we conducted an A/B test with an initial version that aimed to rearrange the sections, focusing on the information that was truly relevant given the context and the user’s stage in the journey. We ran this test until we were confident that our objectives were being achieved, and only then did we start developing the new page.

Choosing the test population

We needed to reach a substantial population for the test to tell whether the UI (User Interface) change would actually improve the experience and, consequently, the business metrics. We therefore decided to divide our user base, with half accessing version A and the other half accessing version B.
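A 50/50 split like this is commonly done by hashing a stable user identifier, so each user always lands in the same bucket across visits. The scheme below is a generic sketch under that assumption, not QuintoAndar's internal assignment tool; the experiment name and ID format are made up:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "listing-redesign") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing experiment + user_id gives an even, stable split: the same
    user always gets the same variant for a given experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"
```

Salting the hash with the experiment name keeps assignments independent across concurrent experiments, so being in variant B of one test does not correlate with being in B of another.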

Control vs. Variant versions

At this point, we had two possibilities: launch the new product after 6 months of development, or split it into incremental versions that would deliver value to the user, validate design assumptions, and allow the team to gain technical context of the application by rolling out the changes gradually. We chose the second option, dividing the new page into 3 versions, each with specific metrics, indicators, and goals. By dividing the release into versions, we could monitor how users adapted to the product and ensure that the UI changes would not have a negative impact on our business.

The strategy we adopted for this progressive rollout was to keep two versions under analysis: the control version and the variant version. The control version was our page in its current state, while the variant version was our new UI proposal, which was constantly incremented.

Coordinating the experiment alongside other experiments

In addition to our experiment, other teams ran experiments over the course of the 6 months that could potentially interfere with ours. To manage this, we adopted a few strategies: all changes proposed by other teams had to be aligned with the product owner, design, and engineering leads of our team; we defined metrics that should be considered in the experiment analysis; and, on the engineering side, we used GitHub’s CODEOWNERS to be notified whenever someone tried to modify the components, API calls, or business rules of the page.
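A CODEOWNERS file for this purpose might look like the sketch below. The paths and team handle are hypothetical, not QuintoAndar's actual repository layout; the point is that any pull request touching these paths automatically requests a review from the owning team:

```
# Hypothetical CODEOWNERS entries guarding the listing page
/src/pages/listing/          @quintoandar/listing-squad
/src/components/listing/     @quintoandar/listing-squad
/src/api/listing/            @quintoandar/listing-squad
```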

Engineering development

As a strategy for developing and validating new features without interfering with the production environment, we made extensive use of feature flags. We can enable or disable a feature through an environment variable or through the flag management tools used in the application, allowing continuous integration and deployment of the code. In this scenario, we used this strategy to continuously deploy modifications, testing them in the development environment and validating the solution internally. Once validated, we only needed to activate the flag, enabling the new functionality (in this case, a new UI) for our users.

To determine which UI (new UI vs. old UI) a user should access, we used a variable stored in the user’s cookies. We have an internal tool that evenly distributes users among the variants of an experiment and sets the cookie, enabling us to identify which variant should be displayed to a given user. This user-specific cookie value is also used when analyzing our conversion data and user behavior in each version.
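The interaction between the feature flag and the experiment cookie can be sketched as follows. This is a minimal illustration, not QuintoAndar's internal tooling; the flag name, cookie name, and return values are assumptions:

```python
# Hypothetical flag state, e.g. loaded from an environment variable
# or a flag management service at application startup.
FLAGS = {"new_listing_ui": True}

def render_listing(cookies: dict) -> str:
    """Decide which UI to render for a request.

    The flag acts as a kill switch: when it is off, every user sees the
    control UI regardless of their experiment bucket. When it is on, the
    cookie written by the assignment tool picks the variant.
    """
    if not FLAGS["new_listing_ui"]:
        return "old-ui"  # flag off: everyone sees the control version
    variant = cookies.get("ab_listing", "A")  # cookie set on first visit
    return "new-ui" if variant == "B" else "old-ui"
```

Keeping the flag check separate from the bucketing cookie means the team can disable the new UI instantly for all users if a regression appears, without touching experiment assignments.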

Success criteria

During the development of the new product, we used Amplitude for data collection and analysis. Within our context, there were business metrics that indicated whether our modifications were aligning with our objectives or not. Whenever we noticed improvements or regressions, we analyzed the context to determine if such behavior was expected.

For each version, we created new user events to analyze in detail what we were introducing. In addition to fixed metrics comparing one version to another, we also individually analyzed our variant version, and these metrics were extremely important in guiding us and determining if we were moving in the right direction. At times, it was necessary to keep a version live for longer than planned to ensure this.

Process

The following process exemplifies how we managed everything at each stage of the A/B test.

Product Versions

Version 1: In the first version of the product, in addition to the rearranged content, we chose to move some blocks of the page into accordions. This reduced the amount of information on the screen, ensuring that the user would only see what they were truly interested in. An important detail of this version is that by reducing the page content, we increased user interaction with the carousel of similar properties. So if the user was not interested in the property they were viewing, they could find other properties through this carousel and potentially schedule a visit.

Version 2: In the second version of the product, we decided to focus on the First Fold section of the page. In this section, our goal was to emphasize key property information. We revamped the media block, creating a new photo carousel along with a gallery for users to explore every detail of the property. By using more brand colors in the first fold of the page, we improved the product’s identity, making it more representative of QuintoAndar.

First Fold — Before:

First Fold — After:

Version 3: In the third version of the listing, we redesigned some blocks in the property details section to better align them with the new QuintoAndar brand.

Main property details — Before:

Main property details — After:

Final considerations

We made a significant change to one of the company’s most important products, and conducting this experiment gave us the opportunity to develop this new UI without taking significant risks. Thanks to the studies and metric monitoring we carried out, we are confident that, even with substantial changes, users continue to have a good experience, allowing us to evolve our product further.

Written by: Sabryna Moura and Matheus Luna.
