Speeding up large React single page applications

Mark Beeson
Fender Engineering
Dec 17, 2019 · 7 min read
A lighthouse at Point Vicente, Rancho Palos Verdes California
Photo by Jikun Li on Unsplash

React, since its launch in 2013, has matured into an incredible front-end development environment. More than just another JavaScript framework, React has shifted the conversation about what it means to be a web application developer in the 21st century. Gone are the days of waiting for large HTML payloads to be sent and digested by our browsers; in their place are fast, mobile-friendly web applications that are indistinguishable from native apps.

The one thing that React has been especially incredible at, almost without compare, is sending huge chunks of JavaScript down to your browser. Even the default application created by create-react-app sends more than 40k of gzipped JavaScript for an app that’s just a smidge more advanced than hello world.

Alright, maybe that’s not such a great thing. Here’s how we can make it better.

The background of Fender Play

Here at Fender, our Fender Play application is written in React: plain old vanilla React for the web and mobile browsers, and React Native for our iOS and Android apps. While we have a fairly complex application, we feel like Fender Play is a pretty “normal” React web application. It loads data, presents different views, and doesn’t have any out-of-the-ordinary components; in short, it's everything you might expect from a React app with a medium-spicy amount of complexity. We use server-side rendering, but from a development perspective that aspect of our deployment is transparent to engineers. Fender Play talks to Lambda on the server side, maintains its state in Redux, and behaves in pretty much the way you’d picture a modern single-page web application.

Over the past few years of developing Fender Play, we noticed that the overall JavaScript bundle size was rapidly growing out of control. We ran Lighthouse reports on our marketing site and noticed that our scores were, to put it lightly, not the best. Our First Meaningful Paint (the Lighthouse metric for when a good chunk of your page has been painted) was up to — wait for it — keep waiting — wait a little more — not yet — wait for it — 16 seconds on an emulated mobile browser over a 3G connection.

Ouch.

Coming up with a plan

So we set about unwinding some of this bandwidth bloat. Realizing that no single solution was going to fix this overnight, we brainstormed a collection of changes, bucketed them into small, medium, and large sizes, and set aside points every sprint, alongside our feature development, to work through them. The plan was to continuously improve site performance and measure the impact of our changes along the way with Lighthouse and Google Analytics user timings.

Mercedes mechanics working on their Formula 1 car
Photo by Kevin Langlais on Unsplash

Iterative speed improvements

One of the first pieces we landed was loading certain third-party JavaScript libraries only when we needed them. The implementation docs for these kinds of libraries almost always say “just drop this script tag at the bottom of your HTML and you’re all set,” and for a React app it’s easy to drop that into the single HTML file. Unfortunately, that causes a large chunk of JavaScript to load and block rendering when someone first comes to the site. Instead, we integrated a script loader into only the React routes that needed those scripts, and saw an immediate speed improvement.
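A minimal sketch of that kind of script loader looks like the following. The helper name and caching strategy here are illustrative, not our exact production code:

```javascript
// Illustrative on-demand script loader. The in-flight promise is cached
// per URL, so a route that mounts twice never injects duplicate tags.
const scriptCache = new Map();

function loadScript(src) {
  if (scriptCache.has(src)) return scriptCache.get(src);
  const promise = new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.async = true;
    script.onload = () => resolve(src);
    script.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.body.appendChild(script);
  });
  scriptCache.set(src, promise);
  return promise;
}
```

A route component can then call `loadScript('https://example.com/widget.js')` (a placeholder URL) from an effect, so the third-party code is fetched only when that route actually renders.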

Fender hosts our own custom fonts, and one thing we noticed was that they contained glyphs for languages we don’t currently support. Eliminating those extra glyphs cut our font load times in half. Again, this isn’t a huge win on its own, but it shaved a few hundred milliseconds off load and render time for our clients, and it was something we could do easily and in parallel with our other development efforts.

Trading finer-grained analytics for user experience

We use Segment to track events and route browsing data into our data warehouse. Segment, in turn, injects a collection of additional scripts, and what we saw was that our analytics code was loading and executing ahead of our application code, so First Meaningful Paint would get blocked waiting for the Segment-injected scripts to finish parsing and running.

This wasn’t ideal. We would rather our users have a better browsing experience than know, in fine-grained detail, that people were waiting around for seconds before hitting their back button. In conjunction with our data and analytics teams, we made the case for loading our Segment code only after our own code had a chance to run.

Taking the script loader that we had already developed, we would inject Segment (and its subsequent scripts) after our initial application code ran. We verified in our QA environment that no data was being lost, and when we pushed this change to production, even though we didn’t reduce the overall bandwidth load at all, there was a visible difference in perceived loading and painting times.
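The wiring for that deferral can be sketched like this. The function name and the injectable `win` parameter are our illustration (the parameter just makes the helper easy to test); the idea is simply to wait for the page’s load event before pulling in analytics:

```javascript
// Illustrative deferral: run the analytics bootstrap only after the page
// has finished loading, so it never competes with application code.
// `win` defaults to the real window in the browser; it is injectable
// purely so the helper can be exercised outside a browser.
function afterAppLoad(loadAnalytics, win = typeof window !== 'undefined' ? window : undefined) {
  if (!win) return;
  if (win.document.readyState === 'complete') {
    loadAnalytics(); // page already loaded; run immediately
  } else {
    win.addEventListener('load', loadAnalytics, { once: true });
  }
}
```

In practice the callback would inject the Segment snippet via the same kind of script loader described above.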

IntersectionObserver in below-the-fold components

Like a lot of websites these days, Fender makes use of high-DPI images so that our site looks great on modern displays. The side effect is that our images tend to be almost four times the bandwidth of comparable regular-DPI images. Rather than taking the easy way out and exasperating our design department by cutting the high-DPI images, we decided on a better solution. For select React components, we created a React hook called useIntersectionObserver. It takes two parameters, options and target: options contains the properties the IntersectionObserver constructor expects, and target is the element to be observed.

import { useEffect, useRef } from 'react';
// useIntersectionObserver is the custom hook described above;
// the import path here is illustrative.
import useIntersectionObserver from './useIntersectionObserver';

const MyComponent = () => {
  const target = useRef(null);
  const options = {
    root: null,        // observe intersections relative to the viewport
    rootMargin: '0px',
    threshold: 0.1,    // report once 10% of the element is visible
  };
  const [hasIntersected] = useIntersectionObserver(options, target);

  useEffect(() => {
    if (hasIntersected) {
      // load something, e.g. the component's high-DPI images
    }
  }, [hasIntersected]);

  return <div ref={target} />;
};

Now, when elements are outside of the viewport, their images won’t load. When they get within 10% visibility, we’ll fire off an event to load any images, so that by the time they scroll completely within view, they’re loaded.
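Stripped of the React wrapper, the underlying pattern is small. This framework-agnostic sketch (the names are ours, not the hook’s internals) fires a callback once the element reaches the visibility threshold, then stops observing it:

```javascript
// Observe an element and invoke onVisible exactly once, when at least
// `threshold` (10% by default) of it enters the viewport.
function onceIntersected(element, onVisible, threshold = 0.1) {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        observer.unobserve(entry.target); // one-shot: stop watching
        onVisible(entry.target);
      }
    });
  }, { root: null, rootMargin: '0px', threshold });
  observer.observe(element);
  return observer;
}
```

Inside onVisible you would swap a placeholder src for the real high-DPI image, so the fetch starts just before the element scrolls fully into view.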

Watching the Fender Play page lazy load images
Lazy loading in action

This simple change — sorry, IE11 users, but you really should upgrade to Windows 10 — made a huge difference. Our initial page load dropped by a large percentage, and we saw our First Meaningful Paint get much, much faster. We’re convinced that this change can help any component, not just image-heavy ones, and we’re brainstorming a way to open-source a library that works with any React component, short-circuiting the component lifecycle so that it runs only when the component scrolls close to the viewport. We think that will have a huge impact on the React community as a whole.

Code-splitting and lazy loading our JavaScript bundle

The final iterative piece of the puzzle was to start code-splitting, tree-shaking, and lazy-loading our JavaScript. There was a fair bit of pre-work needed here: making sure we were on the latest versions of React, Webpack, and Babel, and migrating our routing to React Router. There was also some time spent making sure our lazy loading worked with server-side rendering. This is one of the places where React tends to be overly difficult to work with; without knowing the specific magical incantations, your JavaScript bundle won’t get split and loaded on demand. At Fender, we believe that this should be the default, out-of-the-box experience for React.

As it stands, the build/deploy phase needs tight controls in the vanilla React experience. Frameworks such as Next.js make this easier, but for junior or mid-level engineers it is a daunting task to get lazy loading working well. Fender has an incredible engineering team, and it still took us a couple of sprints of pre-work, iterating, and validating that all of our routes worked successfully. Thankfully, none of this work blocked our other feature development, and landing our lazy-loading change brought another huge performance gain.
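The core mechanism behind all of this is the dynamic import(), which the bundler turns into a separately fetched chunk. A minimal, framework-agnostic sketch of the idea (the names here are ours; React packages the same pattern as React.lazy rendered inside a Suspense boundary) caches the in-flight load so each chunk is fetched at most once:

```javascript
// Illustrative lazy loader: each loader is a () => import('...') thunk.
// The promise is cached per key, so a chunk is fetched at most once
// per session no matter how many times the route is visited.
const chunkCache = new Map();

function lazyLoad(key, loader) {
  if (!chunkCache.has(key)) {
    chunkCache.set(key, loader());
  }
  return chunkCache.get(key);
}
```

In React this becomes something like `const Lessons = React.lazy(() => import('./routes/Lessons'))` (a hypothetical route module) rendered under a `<Suspense fallback={...}>` boundary; because the import path is a static string, webpack can emit that route as its own file at build time.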

A storefront with a “Free Delivery” sign
Photo by Drew Beamer on Unsplash

Our results so far

Sprinkled in with the rest of our sprint work, the speed improvements landed fairly continuously over the course of about eight sprints. We’re very excited about the results for our users, and about the speed progress still to be made. So far, we’ve reduced the overall bandwidth load of the Fender Play homepage by over 2MB, and First Meaningful Paint dropped from over 16 seconds to — drumroll please — 4 seconds on a 3G connection. That’s 12 seconds shaved off without a lot of effort.

You can see where we’re heading by visiting the Fender Songs homepage and seeing how quickly that page loads. By making even more functional changes, we believe that our users will have the fastest possible browsing experience on Fender Play. We also believe that with certain opinions set for the default React experience, more sites and single page applications can benefit from large speed optimizations.

Thanks to David Arias, Senior Engineer, for contributing technical details to this post.

Mark Beeson
Fender Engineering

Music. Movies. Microcode. High-speed pizza delivery.