Optimising Upsells: Harnessing the Potential of Multivariate Testing

Ljemmerson · Published in Sage Design · 6 min read · Feb 15, 2024

At Sage, we employ a diverse range of research methods to guide our final design deliverables. Within our web and mobile products, one of our most frequently used approaches is qualitative: our team of researchers conducts usability tests on high-fidelity prototypes. In contrast, our marketing site teams rely heavily on quantitative A/B testing to determine which direction for a specific page or element yields the highest conversions.

This quantitative approach is particularly valuable when speed is of the essence and when we need to test smaller components within a broader user journey. For instance, imagine you want to explore whether the arrangement of tiles on a page influences a user’s decision to purchase a product. This might prove challenging to assess through traditional usability testing, but it becomes possible when utilising A/B testing.

When we conduct A/B testing, our usual approach is to target a small percentage of our user base, typically around 4%. Half of these users are exposed to design A, while the other half encounter design B. The design that outperforms on conversions at a 95% confidence level or higher is the one that ultimately goes live for the remaining user base. This method enables us to make informed design decisions based on concrete data.

Example of how A/B testing works.
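
Under the hood, this kind of split is usually just deterministic bucketing on a user identifier. Our tooling handles the assignment for us, so the hash-based approach below is an assumption for illustration rather than our actual implementation, but it captures the idea: a stable 4% sample, split evenly between the two designs.

```python
import hashlib

def bucket(user_id: str, sample_pct: float = 4.0) -> str:
    """Deterministically assign a user to 'A', 'B', or 'holdout'.

    Hashing the user id yields a stable value in [0, 100), so the
    same user always lands in the same group across sessions.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    value = int(digest, 16) % 10000 / 100.0  # roughly uniform in [0, 100)
    if value < sample_pct / 2:
        return "A"        # first 2% of users see design A
    if value < sample_pct:
        return "B"        # next 2% see design B
    return "holdout"      # the remaining 96% see the existing design

print(bucket("user-123"))
```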

However, you may find yourself in a situation where you have more than just two options to test. Take, for instance, your content strategy — your content designer might have a multitude of ideas they want to experiment with. When you’re dealing with multiple variants like this, it’s known as multivariate testing.

Much like A/B testing, multivariate testing analyses and compares various design or content elements to determine which combination yields the most statistically significant lift in conversions. This typically involves testing multiple variables simultaneously. The winning design, the one that achieves a confidence level of 95% or higher, is the one you'll ultimately roll out to your entire user base.

Multivariate testing allows you to fine-tune your approach by considering a broader range of possibilities and ultimately adopting the most effective one to enhance the user experience.

Example of how multivariate testing works.
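
To make "testing multiple variables simultaneously" concrete, here is a toy Python sketch. The variables and values are entirely hypothetical; the point is that every combination becomes its own treatment, which is why the number of variants grows quickly.

```python
from itertools import product

# Hypothetical variables: a real test would use your own copy and designs.
headlines = ["Get paid faster", "Stop chasing invoice payments"]
ctas = ["Try card payments", "Set up Stripe"]
placements = ["banner", "inline tile"]

# A full multivariate test exposes users to every combination of variables,
# then compares conversion rates across all of them at once.
for i, combo in enumerate(product(headlines, ctas, placements), start=1):
    print(f"Treatment {i}: {combo}")
```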

I had the opportunity to undertake a multivariate testing project for our Accounting app, particularly when we integrated Stripe payments into our invoice processing workflows. In the past, our users had to receive invoice payments through bank transfer, cheques, or over-the-phone transactions. Unfortunately, this often resulted in our users spending valuable time chasing payments that should have been settled promptly. This diverted their attention from more critical aspects of growing their businesses.

Our journey to seamlessly integrate Stripe payments into our platform was meticulously designed and usability tested. Through extensive testing and comprehensive market research, we gathered valuable insights and feedback that we then actioned. This left us confident that the introduction of this new feature would provide users with substantial value and position us ahead of our competitors in terms of offerings.

Showing the information architecture for the Stripe Integration.

Nevertheless, we encountered a significant hurdle shortly after launching our Minimum Viable Product (MVP). We observed that the conversion rate for users signing up for this new feature was low. Recognising the need for improvement, we shifted our attention towards devising an in-app upselling strategy.

We saw this as an ideal opportunity to implement multivariate testing to optimise our approach. In our initial tests, we decided to introduce the upsell on the invoice screen. We made this decision based on our analytics data, which indicated high traffic to this particular area. Take a look below to see a wireframe of the screen that we focused on for this experiment:

Wireframe of the screen showing where the upsell would go.

Following brainstorming sessions with our content designer, we realised that there was a multitude of individual features we could potentially highlight in the upsell content. However, we found ourselves at a crossroads, unsure of which particular feature or combination of features would yield the highest conversion rates. To tackle this uncertainty, we decided to begin our multivariate testing journey by focusing on the content itself.

When conducting a multivariate test, it’s crucial to fine-tune small elements or variables with each iteration. This approach enables us to pinpoint precisely which variable wields the most significant influence on our conversion rates.
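
As a hypothetical sketch of what that discipline looks like in practice, you might structure each round to vary a single class of element while holding everything else constant. The labels below are placeholders, not our actual variants:

```python
copy_options = ["Copy A", "Copy B", "Copy C", "Copy D"]   # placeholder labels
visual_options = ["Visual A", "Visual B", "Visual C"]

# Round 1: vary only the copy; hold the visual treatment constant.
round_one = [{"copy": c, "visual": "default"} for c in copy_options]

# Round 2: lock in the round-one winner and vary only the visual treatment.
winning_copy = "Copy D"  # placeholder for whichever variant wins round one
round_two = [{"copy": winning_copy, "visual": v} for v in visual_options]

print(len(round_one), "variants in round one")
print(len(round_two), "variants in round two")
```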

Here, you’ll find the different copy variations we subjected to testing, with the winning version highlighted.

The copy that we tested via multivariate testing on our first round.

Below, you’ll find the actual figures illustrating how our upsell impacted conversions. We used an online multivariate testing calculator for this analysis (there are numerous options available). It’s worth noting that we incorporated a control group that didn’t receive the upsell, which served as a baseline for comparison. This helped us establish with certainty that the upsells were indeed influencing the overall conversion rates.

As mentioned earlier, confidence levels matter here. In this data, you can see that Treatment 4 achieved a confidence level of 100%, the highest degree of statistical confidence the calculator reports. This outcome gave us the assurance we needed to proceed with this particular variant.

A multivariate calculator showing real data from inside the accounting app.
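
If you're curious what a calculator like this is doing behind the scenes, at its core it runs a proportion test of each treatment against the control. Here is a minimal Python sketch of that maths, using hypothetical counts rather than our real figures:

```python
from math import erf, sqrt

def confidence_vs_control(control, treatment):
    """Two-proportion z-test: confidence that a treatment beats the control.

    Both arguments are (views, conversions) tuples.
    """
    (n1, c1), (n2, c2) = control, treatment
    p1, p2 = c1 / n1, c2 / n2
    p_pool = (c1 + c2) / (n1 + n2)                        # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error
    z = (p2 - p1) / se
    return 0.5 * (1 + erf(z / sqrt(2)))                   # one-sided confidence

# Hypothetical counts for illustration only, not our real data.
control = (1000, 20)
treatments = {"Treatment 1": (1000, 24), "Treatment 4": (1000, 42)}
for name, counts in treatments.items():
    print(f"{name}: {confidence_vs_control(control, counts):.1%} confidence")
```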

With the successful copy in hand, our next step was to explore the visual presentation of the upsell. Here, you can see the different variations we tested. We followed the same testing methodology as described earlier but narrowed our focus to four groups (including the control group) instead of five. As before, we've highlighted the winning test below.

The visual treatments that we tested in round 2 of our tests.

Lastly, we implemented the winning test for 100% of our user base and monitored the ongoing conversion rates.

Of course, this straightforward upsell is just the beginning of our journey. We’re committed to continually monitoring conversion rates and conducting further tests to optimise user engagement with this new feature. For instance, we might explore how this upsell integrates with our dashboard onboarding tile, or we could delve into the impact of email-based upsells on increasing conversions.

In our multivariate testing endeavours, we relied on two key tools: Pendo and a multivariate result calculator. Pendo empowered us to craft upsells that could be seamlessly integrated alongside various page elements, allowing us to target specific percentages of our user base. This enabled us to closely track both Stripe integration conversion rates and upsell views simultaneously.

As for the nifty multivariate calculator, it’s quite straightforward to use. You simply input the number of upsell views and the number of successful conversions, and it identifies the most statistically significant test.
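
In other words, the inputs are just two numbers per variant. Here is a toy example of that shape (with made-up counts), ranking the variants by raw conversion rate before the significance test is applied on top:

```python
# Hypothetical counts for illustration only: (upsell views, conversions).
results = {
    "Control":     (1200, 24),
    "Treatment 1": (1180, 31),
    "Treatment 2": (1210, 29),
    "Treatment 3": (1190, 40),
}

# Rank by raw conversion rate; a calculator layers the significance test on top.
for name, (views, conv) in sorted(results.items(),
                                  key=lambda kv: kv[1][1] / kv[1][0],
                                  reverse=True):
    print(f"{name}: {conv / views:.2%} ({conv}/{views} views)")
```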

Choosing the right research tool ultimately depends on your specific needs. In our case, a combination of multivariate testing and usability testing proved ideal, given our need to incrementally test numerous marketing ideas and navigate a complex user flow. However, for other scenarios, A/B testing might suffice, or conducting usability testing alone might align perfectly with your requirements.
