Performance futures — Bundling

Sam Saccone
Oct 29, 2017

From service workers to HTTP/2 to link rel=preload, we have never had more primitives at our fingertips to deliver fast experiences to our users.

But as Eric Bidelman put it so well, Chrome launches a LOT of features. In the past four years more than 1000 *new* features have landed in the browser. It is easy to lose track not only of what has already launched but also of what is coming down the pipeline.

I want to talk about upcoming and future performance patterns that can enable us to deliver fast experiences to our end users regardless of device or connection type.

Let’s establish a common language around performance and the different buckets it breaks down into.

When talking about the performance of any webpage, we can break the client-facing performance experience into three pillars:

  • Network-bound operations: think of this as getting files and resources to the client so they can be executed and displayed to the end user.
  • Parse- and execution-bound operations: think of this as the time the client spends evaluating assets to determine their side effects, things like decoding images and parsing and compiling JavaScript.
  • Render-bound operations: the browser doing the work to take the assets from the parse and compile phase and render them to the client’s screen.

Now let’s take a look at optimizing the first phase: the network-bound delivery of assets.

When it comes to shipping assets to clients, the de facto approach has been to bundle your files into large monolithic assets that contain everything required to run your application.

If you have ever seen an app built with the bundling approach, you are probably all too familiar with these asset names: an application bundle containing the code the application developers have written, a vendor bundle containing all of the third-party dependencies required to run the application, and a style file containing a rollup of all of the app’s CSS.
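As a concrete illustration, the classic setup often looks something like the following webpack configuration. This is a minimal sketch, not a recipe from the talk; the entry path and the node_modules-based vendor split are assumptions for the example, and the splitChunks syntax is webpack 4+:

```js
// webpack.config.js: a minimal sketch of the monolithic bundling approach.
const path = require('path');

module.exports = {
  // All first-party application code is reachable from one entry point.
  entry: { app: './src/index.js' },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name]-bundle.js', // emits app-bundle.js and vendor-bundle.js
  },
  optimization: {
    splitChunks: {
      cacheGroups: {
        // Everything pulled in from node_modules lands in one big vendor bundle.
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendor',
          chunks: 'all',
        },
      },
    },
  },
};
```

The app’s CSS would typically be rolled up in the same way with a plugin such as mini-css-extract-plugin.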

This approach of bundling has long been the “blessed” path for shipping a performant web app, so much so that when you search “how to speed up your website”, bundling is always listed at or near the top of the clickbait “10 tricks to speed up your website” articles.

Bundling does minimize the round trips between your server and the client, making initial load faster, but it comes with some non-trivial performance downsides that are either not well known or easily forgotten. So let’s unpack these downsides a bit.

Assume that two of our JavaScript bundles share a file, common.js. This file has to be included in both bundles for them to work correctly. Now consider what happens when we need to update common.js…

Both bundles (app-bundle and vendor-bundle) are now invalidated. The next time the client requests these assets it will have to redownload them instead of reusing the locally cached copies, making repeat page load performance significantly slower because of a minor change to a single file.
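To see why, here is a small Node.js sketch of how content-hashed bundle names behave. The file contents and the md5-based naming are made up purely for illustration; real bundlers hash the final bundle output, but the effect is the same:

```js
// invalidation-demo.js: illustrative only.
const crypto = require('crypto');

const hash = (s) => crypto.createHash('md5').update(s).digest('hex').slice(0, 8);

let common = 'export const VERSION = 1;';        // the shared module
const appCode = 'console.log("app", VERSION);';
const vendorCode = 'console.log("vendor", VERSION);';

// Each bundle is, roughly, a concatenation of the modules it contains.
const bundleNames = () => ({
  app: `app-bundle.${hash(common + appCode)}.js`,
  vendor: `vendor-bundle.${hash(common + vendorCode)}.js`,
});

console.log('before change:', bundleNames());
common = 'export const VERSION = 2;'; // a one-line edit to common.js
console.log('after change: ', bundleNames());
// Both bundle names (and therefore both cached URLs) change, so the client
// has to redownload both bundles because of a single-file edit.
```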

Imagine for a moment that, instead of bundling our application, we split our files out in a granular fashion… Some of you may be familiar with this approach as powered by tools like webpack and React Router, but I want to push it even further.

What if we started to split CSS and JS files by component instead of just by route?

With this approach of highly granular assets, a change to a single file no longer invalidates entire bundles; it only causes a single, smaller file to be redownloaded by the browser. This single-file invalidation has the smallest possible impact on repeat page load performance for the end user, since the browser can go to the cache for the rest of the assets and only has to fetch the one file that changed.
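With webpack, this kind of per-component granularity falls out naturally from dynamic import() boundaries. A minimal sketch, where the component file and element id are hypothetical:

```js
// Each dynamic import() becomes its own chunk, so editing ProfileCard.js
// only invalidates the ProfileCard chunk, not the rest of the application.
async function showProfile(userId) {
  const { ProfileCard } = await import('./components/ProfileCard.js');
  document.body.appendChild(ProfileCard({ userId }));
}

document
  .querySelector('#profile-link')
  .addEventListener('click', () => showProfile('42'));
```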

There are some other non-trivial wins that come along when we start shipping granular assets. To understand one of the largest, we need to talk about what happens when we download and evaluate a JavaScript file.

From the moment the browser requests a JavaScript file to the moment it runs the code, the asset goes through several discrete phases. First the bytes have to be downloaded from the server to the client. Once the asset has been downloaded, the JavaScript engine must parse the code and then compile it into code specific to the platform and architecture of the client. Only when all of that work is done is the code executed.

When we ship the same asset without changing its contents, the browser is able to skip the download phase and reuse the copy already sitting in the local cache, a significant repeat page load performance win.
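For the cache to be usable on repeat loads, the assets also have to be served with headers that let the browser reuse them without a revalidation round trip. Here is a minimal sketch using an Express static server, which is my assumption for the example and not part of the original talk; any server or CDN can send equivalent headers:

```js
// server.js: long-lived caching for content-hashed assets (illustrative).
const express = require('express');
const app = express();

// Hashed files like app-bundle.3f9a1c.js never change in place, so the
// browser can cache them for a year and skip the network on repeat loads.
app.use(
  '/assets',
  express.static('dist', { immutable: true, maxAge: '1y' })
);

app.listen(8080);
```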

However, we are still paying the parse and compile cost for this asset. Luckily, there is a path to minimize this cost in Chrome today.

When the same asset is parsed and compiled multiple times in the browser, Chrome recognizes that it is repeating work and is able to reuse the previous parse and compile result instead of redoing it. Reusing the earlier parse and compile yields, on average, a 40% reduction in parse and compile time, again resulting in a faster experience for your users. The “trick” here is to make sure you are not invalidating your assets rapidly, because the moment a file changes the browser can no longer kick in this advanced load optimization.

So where does this leave us when it comes to shipping bundles versus granular assets? I would implore you to look at strategies for shipping more granular bundles, so that you retain better cache integrity even in a code base that is changing, and your end users get a fast repeat page load experience.

This post is an excerpt from my talk at Chrome Dev Summit 2017.
