A/B Testing with Google Optimize and Redux

Collin Bourdage
Neighborhoods.com Engineering
5 min read · Nov 2, 2018

Early on, when we began building out a way to support A/B experimentation within our React/Redux applications, we knew we wanted the flexibility to easily change service providers. We quickly realized there were not many examples or much documentation on how best to implement A/B experimentation with the various services in single-page applications. This left us to figure out how best to build A/B experiments and feature gates into our applications in the least intrusive way, in keeping with our engineering values.

From day one we knew we needed to approach this problem by abstracting the switch/gate controls away from our service logic, so that we could change the service logic without impacting our experimentation gate controls. This led us to the following result:

<ServiceAExperiment id="ButtonCta">
  <Variation>
    <Button variant="primary">Learn More</Button>
  </Variation>
  <Variation name="VariationB">
    <Button variant="secondary">Learn More</Button>
  </Variation>
</ServiceAExperiment>

The ServiceAExperiment component implemented a standard Experiment component under the hood, and all the underlying Experiment component did was facilitate the necessary switch, based on the props provided, to enable a feature.
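Under the hood, that kind of component can be little more than a switch over its children. Here's a minimal sketch of how such a component could work; `activeVariation` is a hypothetical prop name standing in for whatever assignment the service resolves:

import { Children } from 'react';

// Variation is a named passthrough wrapper around its children.
const Variation = ({ children }) => children;

// Experiment renders the <Variation> whose name matches the
// assigned variation, falling back to the first (default) child.
const Experiment = ({ activeVariation, children }) => {
  const variations = Children.toArray(children);
  const match = variations.find(
    (child) => child.props.name === activeVariation
  );
  return match || variations[0] || null;
};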

Abstracting Away Service Knowledge

This worked well for us for a good while. However, when we went to replace ServiceA with ServiceB, we realized we needed to swap out our implementations and train engineers to implement all new experiments as ServiceBExperiment. This wasn't too difficult, but we felt that the service did not need to be known by any implementing engineer at all. That led us to remove the service from the picture, leaving us with:

<Experiment id="ButtonCta">
  <Variation>
    <Button variant="primary">Learn More</Button>
  </Variation>
  <Variation name="VariationB">
    <Button variant="secondary">Learn More</Button>
  </Variation>
</Experiment>

At first the added component layers seemed unnecessary and overkill, but what they did provide was:

  • A standardized interface for implementing experiments
  • A mask over service-specific logic during implementation

This allowed us to replace every part of the service layer without touching a single experiment's implementation, and to ignore the differences between ServiceA and ServiceB entirely at the implementation level.
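One way to picture that service layer (purely illustrative; none of these names come from our codebase) is a single adapter module that the generic Experiment consults, so swapping providers means rewiring one file:

// experimentService.js: a hypothetical adapter, the only module
// that knows which vendor sits behind the curtain.
let provider = null;

// Called once at bootstrap with whichever vendor client is in use.
export const setProvider = (client) => {
  provider = client;
};

// The uniform interface every <Experiment> relies on: given an
// experiment id, return the name of the assigned variation.
export const getVariation = (experimentId) =>
  provider ? provider.getVariation(experimentId) : null;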

Constantly Refactoring

Of course, nothing goes perfectly the first (and sometimes the second) time through. Though we had cleaned up the interface a lot, our approach relied heavily on placing most of the provider-specific logic at the component level or within specific actions. This seemed fairly sane at the time and never posed any obvious challenges. However, as experiments became more complex and focused, we identified the following downsides:

  • A more complicated test relies on very specific user actions or a combination of actions
  • A service needs more context about a user's behavior in order to target that user

Of course, we could compose highly specific actions for each experiment and retain our store data for each, but that did not seem entirely scalable. For instance, if we’re running three or four tests at a time, these actions become unmanageable because they:

  • Add bloat to our software
  • Create a larger change set to implement an experiment
  • Increase the complexity of our application

Around the same time, we were also beginning to track a lot more analytics data and associate it with our tests, measuring more than just a few main goals of our application. This made us rethink our approach and ask how we could better handle complex experimentation targeting rules, as well as broader data tracking, for these highly specific actions.

Actions as First-Class Citizens

This led us to focus our approach around application actions. As a product team, we had already identified that we needed to rethink what actions we had in our application and how we could better respond to user actions as a whole. If a user clicks a button CTA, we want to track that from an analytics perspective, but perhaps also trigger data fetching or experimentation targeting. So we introduced a more general action:

emitApplicationEvent(eventName: string, payload: ?Object = null)
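The exact shape of that action is up to you; here's a minimal sketch, assuming a single generic action type that middleware can fan out on (the names below are illustrative):

// One generic action type; middleware inspects meta.eventName to
// decide whether analytics, experimentation, or data fetching
// should react to it.
export const APPLICATION_EVENT = 'APPLICATION_EVENT';

export const emitApplicationEvent = (eventName, payload = null) => ({
  type: APPLICATION_EVENT,
  meta: { eventName },
  payload,
});

// Usage:
// dispatch(emitApplicationEvent('buttonCta/click', { variant: 'primary' }));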

This opened the door to implementing analytics and experimentation as middleware within the application. It also allowed us to start building complex targeting rules for users, assigning them based on scroll depth or specific CTAs, far more easily.

Redux Middleware with Google Analytics + Optimize

This felt like an organic transition, since we were already leveraging more middleware within our application, and it turned out to be a great opportunity to create service-specific middleware for whichever provider we were working with to facilitate experiment assignment.

Google Optimize

This worked out really well for us, and when we recently decided to shift to Google's Optimize A/B testing platform, we finally had a way to prove to ourselves that the change was beneficial. Anyone who has implemented Google Optimize in a single-page application may have stumbled into the same problem we did: the documentation leaves much to be desired.

Once we understood how Optimize relayed the activation of a test, we began to see how beneficial the change in our actions and our move toward middleware really was. Integrating with Optimize through Google Tag Manager basically requires just one additional property:

eventCallback: () => {}
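In Google Tag Manager terms, eventCallback is a property on the object you push to the dataLayer, and GTM invokes it once every tag fired by that event (including the Optimize activation) has finished. Roughly, and with an illustrative event name:

// Push an event to GTM; eventCallback fires after all tags
// triggered by this event, including Optimize, have completed.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: 'optimize.activate', // illustrative event name
  eventCallback: () => {
    // safe to react to the experiment assignment from here
  },
});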

Knowing that piece of information opened the door for us to make a very minor change to our Google Tag Manager middleware, remove our ServiceB middleware entirely, and start experimenting again.

Our new middleware looked very similar to the following:

trackEvent(eventName, {
  ...eventOptions,
  eventCallback: () => {
    next({ type: `${eventName}/callback`, meta, payload });
  },
});
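For context, here is roughly how that fragment could sit inside a middleware. The surrounding wiring is an assumption based on the snippet above, reusing the hypothetical APPLICATION_EVENT shape from earlier:

// Hypothetical wiring: intercept application events, forward them
// to GTM via trackEvent, and dispatch a follow-up action once the
// tags have fired.
const analyticsMiddleware = (store) => (next) => (action) => {
  if (action.type !== APPLICATION_EVENT) return next(action);

  const { meta, payload } = action;
  const { eventName, ...eventOptions } = meta;

  trackEvent(eventName, {
    ...eventOptions,
    eventCallback: () => {
      next({ type: `${eventName}/callback`, meta, payload });
    },
  });

  return next(action);
};

Dispatching the `${eventName}/callback` action through next lets reducers or other middleware respond only after the tags have actually run.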

The implementation of our experiments remained almost exactly the same from ServiceB to Google Optimize, which proved beneficial when composing new experiments and running follow-ups to existing ones.

Conclusion

We've gone through quite a few iterations and services at this point, and the one thing that has remained consistent is our approach and the value we place on ease of change. As noted throughout this article, we've done a lot of refactoring around experimentation, and where we've netted out today looks very different from where we started two years ago. That constant rethinking makes me really curious to see where we are two years from now.

TL;DR

  • Interfaces are your friend: define them, use them!
  • User actions may impact your application in many ways, so make the whole application aware of what’s happening.
  • Middleware is a great place for handling side-effects within Redux.
