Tree Shaking vs Code Splitting: A Real-World Benchmark

Gordon Hempton
Outreach Engineering
6 min read · Sep 20, 2016

As a single page web application becomes larger and larger, it becomes increasingly important to find efficient ways to bundle and execute its code. Two of the most popular ways to do this are to use a modern bundle-optimization tool that incorporates tree shaking, such as Rollup, or to invest in asynchronous code loading, splitting the JavaScript codebase into multiple payloads that are loaded at run-time. Although in principle these two techniques are not mutually exclusive, in practice they are not very compatible.

We prototyped three different variations of asynchronous code loading and evaluation as well as an implementation of Rollup. Each of these optimization prototypes was benchmarked using our primary application (outreach.io), and compared against our current bundling implementation.

Some Background

Before we go into the details, here is a high level overview of our application:

  • Framework: React 15.3 and ReactRouter 2.7.x
  • Dependency Management: JSPM, SystemJS, Bower (legacy)
  • Build Tooling: Broccoli, Babel, and SystemJS Builder (via broccoli-systemjs-builder)
  • Module Stats: 1503 Modules
  • Route Stats: 178 total routes, 122 possible entry points
  • Overall Size: ~6mb uglified JS, 2.5mb optimized CSS, 1mb uglified legacy Bower dependencies
  • Content Delivery: Cloudfront (w/ HTTP2 and compression)

Approach

Results were accumulated programmatically using Capybara and Selenium. The top 5 entry points in our application were selected (as determined by traffic). Our application was instrumented to collect the value of window.performance.now() once the inner-most route of the specific entry-point was finished rendering (this may include one or more ajax requests). The data was collected using Chrome 53 and Firefox 47 with both a cold and warm browser cache. Each data point was averaged over 4 samples, each using a completely new browser process.
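As a rough sketch of what that instrumentation might look like (the component, the data-fetching helper, and the __renderCompleteAt global below are illustrative rather than our actual code), the leaf route component stamps a global once its data has loaded and rendered, and the Capybara/Selenium harness reads that value back:

import React from 'react';
import { fetchAccounts } from './api'; // hypothetical data-fetching helper

class AccountsRoute extends React.Component {
  componentDidMount() {
    // Wait for the route's ajax request(s) before recording the timestamp.
    fetchAccounts().then((accounts) => {
      this.setState({ accounts }, () => {
        window.__renderCompleteAt = window.performance.now();
      });
    });
  }
  render() {
    const accounts = (this.state && this.state.accounts) || [];
    return <ul>{accounts.map((a) => <li key={a.id}>{a.name}</li>)}</ul>;
  }
}

export default AccountsRoute;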

Results

Single Static Bundle (Status Quo)

Prior to this exploration, we took a simplistic approach to application bundling: all of our modules were transpiled into System.register format via Babel and concatenated into a single large bundle. A single module contained our entire React Router configuration and all modules were statically referenced with import statements.
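In React Router terms, the status-quo configuration looked roughly like the following (route and component names here are illustrative). Because every component is referenced with a static import, loading the route configuration means loading and evaluating the entire codebase up front:

import React from 'react';
import { Route } from 'react-router';
import App from './app';
import Accounts from './routes/accounts';
import Prospects from './routes/prospects';
// ...and so on, one static import for every routed component.

export default (
  <Route path="/" component={App}>
    <Route path="accounts" component={Accounts} />
    <Route path="prospects" component={Prospects} />
  </Route>
);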

Lower is better. Tested against our 5 most frequent entry points.

On Chrome 53, it took an average of 3639ms to load our application, and 3169ms when cached. Firefox was significantly worse, at 6235ms and 6150ms respectively.

Async Evaluation with a Single Bundle

Although this approach also consists of a single large bundle of similar size, in this case the router layer was refactored to no longer statically reference any components. Instead, only the name of the module was referenced and the module was dynamically imported during the routing process. In React Router 2.X, this looks something like:

...
getComponents(location, callback) {
  // Defer fetching and evaluating the route's component until the route is
  // actually visited, resolving it by module name rather than a static import.
  System.import(this.moduleName).then((m) => callback(null, m.default));
}
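In context, a route can then be declared as a plain object that carries only the module's name, something like the following (paths and module names are illustrative):

const accountsRoute = {
  path: 'accounts',
  moduleName: 'routes/accounts',
  getComponents(location, callback) {
    System.import(this.moduleName).then((m) => callback(null, m.default));
  },
};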

The benefit this has over our existing implementation is that the evaluation of our modules is deferred until they are actually needed:

Delaying evaluation resulted in a significant performance gain, changing the load time to 2607ms on Chrome (2524ms when cached) and 4594ms (3486ms cached) on Firefox.

Async Evaluation with Multiple Bundles

Instead of having a single bundle containing all modules, this approach had multiple bundles. Using bundle arithmetic, we created a shared bundle coming in at 3.4mb (661k compressed) as well as optimized per-entry-point bundles ranging in size from 5kb to several hundred kilobytes:

By using multiple bundles, less code is loaded. This resulted in a slight performance boost: Chrome coming in at 2349ms (2198ms cached), and Firefox at 4517ms (3208ms cached). One difficulty with this approach, however, is that deciding how to split up an application into bundles is non-trivial, and care needs to be taken to ensure that bundles do not overlap.
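For reference, a sketch of what this might look like with systemjs-builder's bundle arithmetic (file names are illustrative). Subtracting the shared bundle's tree from each entry point is what keeps the per-entry-point bundles from overlapping:

const Builder = require('systemjs-builder');
const builder = new Builder('./', 'config.js');

// One shared bundle with the framework and common modules...
builder.bundle('app/shared.js', 'dist/shared.js')
  // ...then one bundle per entry point, minus everything already in shared.
  .then(() => builder.bundle('app/routes/accounts.js - app/shared.js', 'dist/accounts.js'))
  .then(() => builder.bundle('app/routes/prospects.js - app/shared.js', 'dist/prospects.js'));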

Async Evaluation with Individual Modules

Individual modules, each loaded in a separate request, are the holy grail of SystemJS. Theoretically, with the advent of HTTP2, the overhead of each request becomes negligible, and there is no benefit to bundling multiple modules together. Moreover, in this world, tree shaking is free, as only the modules that are needed will be loaded on demand. Unfortunately, according to SystemJS's creator, individual modules are not quite ready for prime time because the various browsers' JavaScript runtimes haven't been optimized for this type of execution.

Despite feeling wary, we decided to prototype it anyway. For this approach, we kept the shared 3.4mb bundle above, but left out the per-entry-point bundles. Instead, SystemJS was left to its own devices to load each module separately at run-time. To keep this as optimized as possible, we also wanted each individual module to be pre-compiled and uglified. Neither SystemJS Builder nor JSPM makes it easy to package individual modules for production, so we did this ourselves using Broccoli and Babel. We also prebuilt an 80.5kb depcache to allow nested module dependencies to be loaded in parallel:
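The depcache itself is just SystemJS configuration that maps each module to its direct dependencies, so a module's whole dependency tree can be requested in parallel rather than discovered one request at a time. A sketch (module names below are illustrative):

System.config({
  depCache: {
    'routes/accounts.js': ['components/account-list.js', 'stores/accounts.js'],
    'components/account-list.js': ['components/avatar.js'],
    // ...one entry per module; ours weighed in at 80.5kb in total.
  },
});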

Interestingly, there was an essentially negligible difference between this implementation and using a single asynchronously evaluated bundle. Chrome came in at 2753ms (2281ms cached) and Firefox at 4237ms (2903ms cached).

Rollup

Rollup is a brilliant piece of technology that targets ES6 modules to make some next-generation optimizations. Specifically, it does some fancy inlining to place modules within the same scope and uses a technique called Tree Shaking to eliminate unused modules entirely. At least one benchmark shows it to deliver massive performance wins over other bundling approaches when the number of modules becomes large.
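As a toy illustration (not from our codebase): if only one of a module's exports is imported anywhere, Rollup drops the unused export from the output, and its scope hoisting places the surviving code directly in the consumer's scope rather than wrapping each module in its own function.

// utils.js
export function formatDate(date) {
  return date.toISOString().slice(0, 10);
}
export function neverCalled() {
  return 'this export is never imported anywhere';
}

// main.js
import { formatDate } from './utils';
console.log(formatDate(new Date()));

// Rollup's output inlines formatDate next to main's code and omits
// neverCalled entirely.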

Conveniently, SystemJS has a tight integration with Rollup, and utilizing it in a static build is as simple as adding a single rollup: true parameter. In order for Rollup to function, however, the router layer must remain static and all modules must be referenced up front (as was the case in the status quo).
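With systemjs-builder, that looks roughly like the following sketch (paths are illustrative):

const Builder = require('systemjs-builder');
const builder = new Builder('./', 'config.js');

// buildStatic produces a single self-executing bundle; rollup: true turns on
// Rollup's scope hoisting and tree shaking across the statically-referenced tree.
builder.buildStatic('app/main.js', 'dist/app.js', { rollup: true, minify: true });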

Rollup tells us it has inlined 1101 modules, and the resulting bundle is only 3.3mb, roughly half the size of the 6mb bundle without Rollup!

Fascinatingly, in our application's case, Rollup is actually slightly slower in Chrome than the asynchronous approaches but hugely more performant in Firefox. This is the first benchmark in which Firefox comes out ahead, and its cached data point is the fastest result overall. Chrome came in at 3429ms (2288ms cached) and Firefox came in at 2735ms (1574ms cached).

Conclusion

Our actual traffic consists of 98% Chrome (part of our product offering is a Chrome extension, after all!), so once we average the cached/non-cached data points and weight the browsers by actual traffic, we see the following improvement over our original implementation:

In our case, asynchronously loading and evaluating our modules provides us more benefit than creating a highly optimized bundle using Rollup.

As a final parting thought: YMMV. The tradeoffs of these approaches are highly dependent on the application to which they are being applied. In our case, we have a large number of routes and the luxury of targeting desktop users on evergreen browsers. Smaller applications and those without a large routing layer will probably gain more from a heavily optimized single bundle. That said, horizontal scalability is important. With properly implemented code splitting, additional routes and vendor dependencies in far-off corners of the application will have zero impact on other parts of the site. This is not the case with Rollup, however optimized its overall bundle might be.

In an ideal world, it would be possible for Rollup to take multiple entry points (apparently Webpack 2 might already support this) and create multiple bundles. Even if this were possible, it would still require a dubious runtime-level understanding of the application. It is hard not to feel that bundling as a whole is a temporary solution and that individual module loading is the future (HTTP2 push, anyone?).

Are you as excited about modern front-end development as we are? Help us write amazing software in React, Rails, and Go in our beautiful Seattle office. Check out our careers page.
