The Growth Machine: How Noom Runs 365 Landing Page Experiments Per Year

Patrick Lee
Aug 19, 2020 · 12 min read


Hello! My name is Patrick, and I’m a Senior Software Engineer on the Growth team. Previously, Keith introduced Noom’s Growth Machine; I’ll dive a little deeper into the technology behind how we run experiments at Noom, culminating with Project Meristem.

One of Noom’s core principles is “Optimize for fast learning.” We do this by running experiments; we run a lot of them across the entire company. Successful experiments help us improve the performance and quality of our product. Failed experiments are valuable opportunities to learn, and we gain as much, if not more, from these lessons. The build-measure-learn cycle is deeply embedded into the culture at Noom, and we’ve discovered it’s an immensely powerful tool when deployed at scale.

The engineers on the Growth team are charged with supporting an extremely fast build-measure-learn cycle, with the ultimate goal of running as many experiments on our website as possible. Ideally, every user of our website is part of at least one experiment that helps improve the overall product. To put things into perspective: our traffic allows us to run roughly one experiment with a significant sample size per day. With an average success rate of 1 in 10, we can expect a winning experiment about once every 10 days, or around 36 successful experiments over the course of a year, each with some positive effect on our key metrics. Compare that to a website that runs no experiments and sees no measurable improvement. These gains compound over time, can be a major driver of growth for the company, and, depending on the experiments, can ripple across the entire organization. Running at least one experiment on each user is a lofty goal and requires constant improvement to our processes, team, and technology. Since this is an engineering blog, we’ll go into some of the lessons learned and the subsequent iterations of our web stack.

If you’re interested in speeding up your experimentation cadence even further, my colleague Paul wrote about accelerating A/B tests with shared control groups. I highly recommend reading his post.

Noom’s website started as a custom WordPress theme. WordPress, paired with the Optimizely plugin, created a powerful combination of a page editor and an easy way to run simple experiments. The main benefit was a low barrier to entry for a budding Growth team. It also allowed a Product Manager to run experiments without an engineer. However, once our experiments required more custom features, engineering help became necessary. The options were to build a custom WordPress theme and plugins to support the necessary features, or to build a new site using a flexible framework like Django. We chose Django because we believed it would let us build the features we needed easily and give us full control over the environment our experiments ran in. Doing the same in WordPress was possible but required deep knowledge of the platform, and we didn’t have that expertise in-house.

We bootstrapped a Django application, put it up on Heroku, and started sending traffic to it. Keeping things simple, the site was built with Django templates, jQuery, and LESS. To run the experiments, we used Optimizely when the changes were mostly client-side, or custom Python for experiments in our backend code. This setup served us well, allowing Product Managers to build client-facing experiments in Optimizely and engineers to build experimental features quickly and easily. But it wasn’t without its downsides. It was common for dead code to be left in the system because new features were prioritized over cleaning up failed experiments; the dead code took the form of template copies that were no longer used, or feature flags that, when set, would show an experimental feature. As the team grew, it became apparent we needed a better way to manage our architecture.

We decided to migrate to a single page application built with React. We believed we would reduce the impact of dead code by heavily componentizing all our features. The theory was that if we componentized our features well enough, the majority of our experiments could simply pit one component against another. To build an experiment, an engineer would copy a component, modify the copy, and then use Optimizely to set a global variable on window that toggled the experimental feature. Optimizely’s JavaScript loaded before our React script, ensuring our feature flags were injected into the global scope before React initialized. We saved ourselves a lot of work by offloading experiment administration and traffic allocation to Optimizely. The componentization of our code made adding and removing features easier, and in conjunction with the switch to React, we made a more concerted effort to delete code once it was no longer necessary. Overall, the changes were an improvement to our experimentation workflow. Still, as the team grew and we were tasked with building experiments at a faster pace, problems started cropping up.
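A minimal sketch of that window-global handoff (the flag and component names here are invented, and strings stand in for the React components so the sketch is self-contained):

```javascript
// Optimizely's snippet ran before our bundle and set a global, e.g.
// window.noomFlags = { heroV2: true }; React then read it at render time.
function resolveHeroComponent(flags) {
  // Falls back to the baseline component when the flag is absent.
  return flags && flags.heroV2 ? 'HeroV2' : 'Hero';
}
```

In the real app the return value selected between two React components rather than strings.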

Our use of feature flags to enable experiments required control logic to manage which variation to show. It wasn’t uncommon to see nested control logic enabling or disabling different experiments stacked on top of one another. In addition, even with more mindful efforts to remove experiments that had ended, code was still forgotten and left in the codebase. Another major issue was that experiments would sometimes bleed into our baseline website and cause bugs; a bug in the control logic or some improperly scoped CSS was enough to break the baseline site. To run even more experiments, we needed a more robust system.

An example of how our control logic could sometimes look.🤦
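A reconstruction in spirit (every flag and variant name here is invented): each experiment’s flag wrapped the next, so working out which combination of variants a user saw meant tracing every nested branch.

```javascript
// Nested feature-flag control logic: each new experiment added another
// level of branching on top of the last.
function pickTemplate(flags) {
  if (flags.newCheckout) {
    if (flags.longSurvey) {
      return flags.annualPlan ? 'checkout-long-annual' : 'checkout-long';
    }
    return 'checkout-new';
  }
  if (flags.longSurvey) {
    return 'survey-long';
  }
  return 'baseline';
}
```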

Shoot for the Moon

It started with a simple idea: “What if creating a new experiment was as simple as creating a new branch in Git?” We wanted a way to create a new Git branch in our repository, make our changes, and then run an experiment where the control would serve one Git branch, and the variation would be a different Git branch. Once we finished the experiment, we could delete the branch if the experiment failed, or we could create a pull request and merge in our changes if it succeeded.

Easy cleanup

What a powerful idea. After a great deal of research, we couldn’t find any existing tooling that worked this way. The idea was straightforward, so we decided to build it ourselves.

The functional requirements:

  1. Changes we make for an experiment should never affect users seeing our baseline site or other experiments.
  2. It should be dead simple to throw away failed experiments and incorporate successful experiments.
  3. Control and experimental code should be split up, so they aren’t bundled together in our JavaScript. It didn’t make sense for users to download experiments they weren’t going to see.
  4. Creating new experiments should be as simple as checking out a new branch from master.
  5. The new system needs to be compatible with our existing React application. We didn’t have time to rebuild the frontend.

Iteration 1: Have React, Will Travel

The first iteration of our build script was very simple. We wrote a shell script to iterate through several Git branches in our client repository, running webpack on all the assets for each branch. Essentially, this generated multiple independent single page applications. When a user requested the site, we sent back a common HTML file with a loader script attached. The loader knew the locations and names of each of our JavaScript bundles and injected a script tag into the page pointing to whichever bundle we wanted to load. With this basic setup, we were able to inject different variations of the same site. Using conditional logic in the loader, we could then show different users different variations of the site.
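The loader’s job can be sketched roughly like this (the manifest shape, bundle names, and URLs are invented for illustration):

```javascript
// The loader shipped with the common HTML file and knew every bundle's URL.
const bundleManifest = {
  baseline: '/static/baseline.bundle.js',
  variation1: '/static/variation1.bundle.js',
};

function bundleUrl(variant, manifest) {
  // Unknown variants fall back to the baseline bundle.
  return manifest[variant] || manifest.baseline;
}

function loadBundle(variant) {
  // Browser-only: this runs after the loader itself has downloaded, parsed,
  // and executed, which is why the approach delayed first render.
  const script = document.createElement('script');
  script.src = bundleUrl(variant, bundleManifest);
  document.body.appendChild(script);
}
```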

Iteration 1 of the build process

Although it worked, there were major issues with this approach. First, the loader slowed the site’s initial render considerably. The loader script had to download, parse, and run before the main bundle was even injected into the page, so on slower connections, perceived load time was much longer. We ran an A/B test against a plain single page application, and the difference was staggering. Second, the JavaScript bundles were quite large. Each bundle contained the code for the entire single page application, including vendor assets. We couldn’t get code splitting working with our custom build process because we ran webpack separately for each branch; webpack had no chance to aggregate and extract common code. This build process was getting in our way, and we weren’t able to use many of the features webpack had to offer. To reap all the benefits of webpack, we needed to let it manage all of our assets. There had to be a better way to feed webpack the code from all of our Git branches and conditionally load the desired branch. Enter React.lazy and dynamic imports.

Iteration 2: Dynamic Laziness

There is a neat feature built into webpack’s implementation of the dynamic import() statement: it accepts a dynamic expression, such as a template string, as its argument. The string represents a path to a file relative to the application’s entry point (index.js). That file is packaged as a “chunk” (a separate JavaScript file) during the build and downloaded at runtime when it is needed. The magic happens when webpack builds the JavaScript bundles. Normally, when it encounters a dynamic import with a static path, it generates a new chunk for that specific file. When it encounters a dynamic import with a dynamic path, it analyzes the string and generates a chunk for every file that could potentially match. At runtime, it downloads whichever chunk the evaluated path points to.

For example, if we have a baseline component and create two variations of it, we have three versions of the same component. We can use a dynamic import with a dynamic path to load any one of them in our application when a user requests the page.

Example of a dynamic import with a dynamic expression. When webpack runs, it packages the three components into three separate chunks and downloads the appropriate one when requested. (See the webpack documentation.)
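A minimal sketch of the pattern (the folder layout and file names are assumed: each variation exports a default component from `./<name>/Hero.jsx`):

```javascript
// Helper used here only for illustration and testing; webpack needs the
// template literal to appear inline inside import() to analyze it.
function variationPath(name) {
  return `./${name}/Hero.jsx`;
}

function loadHero(name) {
  // The static prefix './' and suffix '/Hero.jsx' are what let webpack
  // enumerate the candidate files at build time and emit one chunk each.
  return import(`./${name}/Hero.jsx`);
}
```

At runtime, `loadHero('variation1')` resolves to the module in the matching chunk once it has downloaded.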

The linchpin of our experimentation system is the ability to load different variations of the same website, and dynamic imports with dynamic paths take care of this, allowing us to load different modules at runtime. To build the rest of our experimentation platform, we needed two more pieces: React.lazy, to dynamically load React components, and a build script that gathers variations from different Git branches and puts them all together.


Where dynamic imports let us lazy load JavaScript modules into our application, React.lazy lets us lazy load React components. By combining the two, we get code splitting plus the ability to load components on demand.

Example of how React.lazy is used to import a component. (See the React docs.)
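A sketch of how React.lazy wraps a dynamic import (the path is invented, and React.lazy is passed in as a parameter only so the sketch runs without a React install; in the app this is simply `React.lazy(() => import('…'))`):

```javascript
// `lazy` is React.lazy: it takes a loader that resolves to a module with a
// default export and returns a component React renders once the chunk
// has downloaded.
function lazyVariation(lazy, name) {
  return lazy(() => import(`./${name}/LandingPage.jsx`));
}
```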

If we allowed lazy loading components anywhere within our component tree, we would have lazy loading statements scattered all over the place, which wouldn’t be much of an improvement over feature flags in terms of readability. However, if we restrict lazy loading to our routes, the logic lives alongside the routes, in a single location in our application. There is a short delay while navigating between pages as the browser downloads the requested route, but users are accustomed to pages loading. By combining React Router with React.lazy, we can dynamically display a control or a variation when a user accesses a route.

As an example, imagine our main component renders a button that loads a random variation of the base route; every click loads a different variation. In a real application, business logic would determine which variation to load. Now when users navigate to the base route, they download only the variation they need. With the loading mechanism in place, the final piece of the puzzle is getting variations from different Git branches into the same place, so webpack can bundle the source code from our baseline and variation branches together.
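The random selection in that example can be sketched as follows (the helper name is invented; `rand` is injectable so the choice is testable):

```javascript
// Given the variation names webpack bundled, pick one at random; the route
// then lazy-loads only that variation's chunk.
function pickVariation(variations, rand = Math.random) {
  return variations[Math.floor(rand() * variations.length)];
}
```

In production, experiment assignment logic would replace Math.random here.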

The Production Build Script

The original idea was to utilize Git to store variations of a common baseline site. React.lazy and dynamic imports provide a way to load different variations conditionally, but we needed a way of getting those variations into the same place as our baseline. With a deliberately constructed folder structure and a shell script, we simply iterate over the desired Git branches and copy the desired variations over to our baseline before running webpack.

Our sample project’s directory structure

At a high level, our shell script clones our React application’s repository into a temporary folder, checks out our master branch, and copies all the source files over to a build directory. Back in the temporary folder, it checks out the first variation branch; instead of copying the entire source directory, it copies only src/baseline, renaming it to variation1. It repeats the same steps for the rest of the variations. Finally, it goes into the build folder, runs npm install, and runs webpack on the whole shebang. All the baseline files and variations are bundled up into a static site, which we can then take and upload to our web server.

Example build script.
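The steps above can be sketched as a build plan (the repository URL and paths are placeholders; the real script runs these as shell commands):

```javascript
// Returns the ordered shell commands the build would run: master supplies
// the full source tree, and each variation branch contributes only its
// src/baseline folder, renamed after the branch.
function buildPlan(variationBranches) {
  const steps = [
    'git clone git@example.com:acme/web.git tmp', // placeholder remote
    'git -C tmp checkout master',
    'cp -R tmp/src build/src',
  ];
  for (const branch of variationBranches) {
    steps.push(`git -C tmp checkout ${branch}`);
    steps.push(`cp -R tmp/src/baseline build/src/${branch}`);
  }
  steps.push('cd build && npm install && npx webpack');
  return steps;
}
```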

Putting it all together, we now have a system that meets the requirements we set out to fulfill.

  1. Code from the variations can’t bleed into our baseline site. They are stored in separate branches and only put together during the build. We have very little code shared between them.
  2. It is extremely simple to create a new variation: just create a new Git branch.
  3. To merge a successful change, create a PR and merge it in.
  4. A secondary benefit of React.lazy is that we get code splitting for free. The client code is split into multiple chunks and fetched when needed, reducing the bandwidth necessary to load the page.
  5. We only needed to make minor changes to our existing React application to get it working with the new system.

Iteration 2 of our build process

Of course, no solution is perfect; there are several downsides we need to work around:

  • There is shared code between the baseline and variations (like App.jsx and the HTML template). We very rarely run experiments on these main components, but if we do need to modify them for any reason, we need to be very careful not to introduce breaking changes.
  • Variation branches are usually forked off of a commit on our baseline branch. If new commits are added to the baseline, we need to rebase our variation branches on top of the baseline — especially if the change is a critical bug fix.
  • Users are required to download many smaller bundles, increasing the number of network requests. On the flip side, the browser is now able to download our JavaScript bundles concurrently, reducing our initial render time.

Noom is investing heavily into our experimentation platform because the more experiments we’re able to run, the faster our build-measure-learn cycle becomes. The faster we learn, the faster we can improve our product. We’re working hard to achieve Noom’s goal of helping as many people as possible live a healthier life. This infrastructure is just the next step, a building block towards a cohesive, powerful experimentation machine. We’re continuing to build out our processes, tools, and infrastructure and learning at an ever-increasing speed.

Are you interested in helping to build the next generation of high-frequency experimentation tools? Check out our careers page for more information.