Optimizing loading time for big React apps
TL;DR: there isn’t one, but you should read on anyway.
Optimizing load time is really important, because high loading times are associated with high bounce rates and poor conversion rates. The biggest bottlenecks are usually the bundle size and the loading of external resources.
In this, my first article on Medium, I will walk you through the process I followed to optimize my React bundles and make them production-ready, reducing bundle size and loading times. The site https://suitup-ui.org was used as a case study, and results are shown using Pingdom and Google’s PageSpeed Insights.
Load just what you need
A big problem with named imports in ES6 is that you can end up importing the whole library and then destructuring it to access the module you need. For small libraries that’s fine, you won’t notice any difference, but for big dependencies like lodash or Ramda, I encourage you to import just what you need.
this is bad:
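A sketch of the pattern to avoid, using lodash as the example dependency (without tree shaking, the named import can pull the whole library into your bundle):

```javascript
// Bad: this can bundle all of lodash just to use one function.
import { map } from 'lodash';

const doubled = map([1, 2, 3], n => n * 2);
```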
this is better:
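The same call, importing only the module that is actually used:

```javascript
// Better: import only the module you need, so the rest of
// lodash never enters the bundle.
import map from 'lodash/map';

const doubled = map([1, 2, 3], n => n * 2);
```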
Minify your code
You can use any tool for this, but I prefer Uglify. It can reduce your bundle size a lot: a 6 MB bundle can easily be reduced to 2 MB, for example.
Here is an example of the Uglify plugin configuration in webpack 2; it’s in fact the configuration I use:
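A typical webpack 2 setup with the built-in Uglify plugin (a sketch of the usual options, not necessarily the author’s exact config):

```javascript
// webpack.config.js (webpack 2)
const webpack = require('webpack');

module.exports = {
  // ...entry, output, loaders...
  plugins: [
    new webpack.optimize.UglifyJsPlugin({
      compress: { warnings: false }, // silence dead-code warnings
      output: { comments: false },   // strip comments from the output
      sourceMap: true,               // keep source maps for debugging
    }),
  ],
};
```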
Important note: in webpack 4 you no longer need to add the Uglify plugin yourself; the only thing you need to do is set the mode to production and minification happens automatically.
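In webpack 4 that boils down to one line of config:

```javascript
// webpack.config.js (webpack 4+): production mode minifies by default
module.exports = {
  mode: 'production',
  // ...entry, output, loaders...
};
```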
Babel loose modules
This is optional and not always recommended, but using the es2015 Babel preset with loose mode set to true can shave some kilobytes off your bundle.
Loose mode generates simpler ES5 code, but that means it may not follow the ES6 specification strictly. If you later switch from Babel to native ES6, you could run into problems (though that’s unlikely in most projects).
To use the loose mode in the es2015 preset, you can do this from webpack config:
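A minimal sketch of the babel-loader rule with the es2015 preset in loose mode:

```javascript
// webpack.config.js -- babel-loader with loose mode enabled
module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            // loose: true emits simpler (smaller) ES5 output
            presets: [['es2015', { loose: true }]],
          },
        },
      },
    ],
  },
};
```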
Target only the platforms you need
With the new babel-preset-env you can target different platforms and save some kilobytes. Also, this works with the babel polyfills so, only needed polyfills will be loaded. Configuration is something like this:
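A sketch of the babel-loader options using babel-preset-env (the browser targets here are just an example, adjust them to your audience):

```javascript
// babel-loader options (or .babelrc contents) with babel-preset-env
{
  presets: [
    ['env', {
      targets: {
        browsers: ['last 2 versions', 'ie >= 11'],
      },
      useBuiltIns: true, // load only the polyfills the targets need
    }],
  ],
}
```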
You can set the useBuiltIns option to load only the polyfills you need. Experimental features are not supported, so you will have to polyfill them manually. Check the preset documentation to learn more.
Tree shaking and dead code elimination
If you can use a tool to remove dead or unused code, use it. In most cases you will not notice a big difference (at least if you are not importing entire libraries without using them, as in the first tip), but every byte counts toward the final bundle size.
In webpack 2 we can activate tree shaking by disabling the module transformation in the es2015 preset and letting webpack take care of it. In production mode, webpack will search for the unused imports and remove them.
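Disabling the module transform looks like this in the Babel options (a sketch; webpack then sees native ES modules and can drop the unused exports):

```javascript
// Keep ES module syntax so webpack can tree-shake unused exports
{
  presets: [
    ['es2015', { modules: false }],
  ],
}
```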
Split your bundle into chunks
You should think about splitting your bundle into chunks if it’s bigger than 250 kB compressed. To do so, you can use dynamic imports.
Dynamic imports are “function-like” imports that return a promise which resolves when the module is loaded. If you use webpack, it will recognize this syntax and split the module into a separate chunk. Check the documentation here.
What can be loaded into chunks?
Good candidates for separate chunks are static JSON files like translations, styles, and assets in general. You can also split your routes into chunks if your site is very big.
Example of dynamic import (taken from the webpack docs):
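A sketch close to the webpack docs example (it splits lodash into its own chunk and loads it on demand):

```javascript
// webpack emits the dynamically imported module as a separate chunk
function getComponent() {
  return import(/* webpackChunkName: "lodash" */ 'lodash')
    .then(({ default: _ }) => {
      const element = document.createElement('div');
      element.innerHTML = _.join(['Hello', 'webpack'], ' ');
      return element;
    });
}

getComponent().then(component => {
  document.body.appendChild(component);
});
```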
Following this idea, you can make any component load as a chunk. First, define a dynamicComponent HOC:
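A minimal sketch of such a HOC (the names dynamicComponent and LoadedComponent are illustrative): it takes a function returning a dynamic import() and renders the component once the chunk has loaded.

```javascript
import React, { Component } from 'react';

// Hypothetical HOC: wraps a dynamic import() in a component that
// renders nothing (or could render a spinner) until the chunk arrives.
function dynamicComponent(importComponent) {
  return class DynamicComponent extends Component {
    state = { LoadedComponent: null };

    componentDidMount() {
      importComponent().then(module => {
        this.setState({ LoadedComponent: module.default });
      });
    }

    render() {
      const { LoadedComponent } = this.state;
      return LoadedComponent ? <LoadedComponent {...this.props} /> : null;
    }
  };
}

export default dynamicComponent;
```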
And then use it everywhere you need it like this:
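Usage under the same assumptions (the Settings component and file paths are hypothetical):

```javascript
import dynamicComponent from './dynamicComponent';

// Settings is split into its own chunk and fetched only when rendered.
const Settings = dynamicComponent(() => import('./Settings'));

// Later, in any render: <Settings user={currentUser} />
```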
Tools to determine what to split in chunks
I use source-map-explorer. With this tool you can see the size of your dependencies and their impact on your application; big dependencies may be worth splitting into chunks. Here is an example from an app I’m working on:
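Running it is straightforward (assuming your build emits source maps; the dist paths are illustrative):

```shell
# Inspect what each dependency weighs inside the bundle
npm install --save-dev source-map-explorer
npx source-map-explorer dist/main.js dist/main.js.map
```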
Deliver your bundles compressed with Gzip
This is very important, and will reduce your bundle to 20%–25% of its original minified size. Browsers have supported gzip compression for a long time, so you don’t need to worry about compatibility.
You can use two approaches to deliver compressed files. The first is to compress the static assets on the fly; the second is to pre-compress the files. We will see how to do this with webpack and Express, but you can accomplish the same with nginx, Apache, etc.
Pre-compressing with webpack
I recommend this approach, but it can be a little harder to set up.
1- Create compressed bundle with compression webpack plugin
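A sketch of the compression-webpack-plugin setup (option names have changed across plugin versions; `asset` is the webpack 2/3-era name):

```javascript
// webpack.config.js -- emit a .gz twin next to each bundle
const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  // ...entry, output, loaders...
  plugins: [
    new CompressionPlugin({
      asset: '[path].gz[query]', // e.g. main.js -> main.js.gz
      algorithm: 'gzip',
      test: /\.js$/,
    }),
  ],
};
```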
2- Serve your compressed file, you can use a middleware like this:
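A minimal sketch of such a middleware (function name and paths are illustrative): when a .js asset is requested, rewrite the URL to the pre-compressed .gz file and mark the response as gzip-encoded, then let the static handler serve it.

```javascript
// Express-style middleware: serve main.js.gz when main.js is requested.
// Assumes every .js bundle has a pre-built .gz twin on disk.
function serveGzipped(req, res, next) {
  req.url = req.url + '.gz';                   // point at the .gz twin
  res.set('Content-Encoding', 'gzip');         // browser will decompress
  res.set('Content-Type', 'text/javascript');  // .gz would otherwise mask the type
  next();                                      // hand off to express.static
}

// Wiring it up (illustrative):
//   app.get('*.js', serveGzipped);
//   app.use(express.static('dist'));
```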
Compressing on the fly with Express
This approach is easier, but the server will need more CPU to compress the files on each request.
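A sketch using the `compression` middleware package for Express (the dist directory and port are assumptions):

```javascript
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());          // gzip every compressible response
app.use(express.static('dist')); // serve the bundles as usual
app.listen(3000);
```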
Cache your static files
I recommend using the browser cache to avoid downloading the same file several times. The recommended expiry time for static files is 2 weeks. You should set at least one strong and one weak caching header.
Strong caching headers
The options available to set the max cache time are Expires and Cache-Control. Cache-Control is the newer one, introduced in HTTP/1.1, and Google recommends it over Expires, but you can choose either.
Weak caching headers
The options available are Last-Modified and ETag. Last-Modified is human-readable, while an ETag is usually a hash.
For more information about cache headers, you can read this excellent article from the Heroku people.
Here is an example of how to set this up in Express, with a strong (maxAge) and a weak (Last-Modified) caching header:
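A sketch using express.static options (the dist directory is an assumption; 14 days matches the expiry suggested above):

```javascript
const express = require('express');
const app = express();

app.use(express.static('dist', {
  maxAge: '14d',       // strong: Cache-Control: public, max-age=1209600
  lastModified: true,  // weak: send Last-Modified so the browser can revalidate
}));
```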
Versioning your files
Caching is great, but what happens if I want my users to always get the latest version of my app? You can version your bundle and request it with a version parameter: every time the URL changes, the browser will request the new version, which is not cached.
For example, if your app is at v2.0.2, the URL could look something like `/bundle/main.js?v=2.0.2`.
Avoid loading external resources
Loading external resources can cause bad scores in page speed testing tools. Try to keep everything local or on a CDN, and avoid things like loading fonts from the Google Fonts API, which can sometimes block rendering.
Avoid loading moment locales
Moment.js is a very popular library, but it is heavy and loads a lot of locales by default. You can save a lot by loading just what you need, using the webpack IgnorePlugin. This solution is used in create-react-app; if you check its webpack configuration, you will see something like this:
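The relevant plugin entry (this matches the pattern create-react-app used, stopping webpack from bundling every moment locale automatically):

```javascript
const webpack = require('webpack');

module.exports = {
  // ...
  plugins: [
    // Ignore moment's automatic ./locale requires
    new webpack.IgnorePlugin(/^\.\/locale$/, /moment$/),
  ],
};
```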
Then in your code, you can load the locales you need like this:
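With the plugin in place, locales are opted into explicitly (Spanish here is just an example):

```javascript
import moment from 'moment';
import 'moment/locale/es'; // pull in only the locales you actually use

moment.locale('es');
```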
Load your scripts asynchronously
Async loading of your app bundle is good for reducing load time: normally browsers wait for your scripts to be downloaded before rendering the HTML, but with async the HTML is parsed while your script is still downloading.
If you put all your app in a single bundle, then there is no danger in loading your scripts like this in the HTML:
<script async src="/bundle/main.js"></script>
Test your app first, sometimes weird things can happen when loading your scripts asynchronously.
Server side rendering
Fortunately, React works on the server too, so you can do server-side rendering. I’m not going to write a full tutorial about it in this article, but you can investigate on your own.
A couple of tips before you start: avoid Facebook’s Flux, since it’s harder to make server-side rendering work with it; Redux is a better choice. Also check out frameworks like Next.js that make it simpler to implement. For apps where you can’t implement it, another option is a tool like Prerender.
Other details you can improve
Some less important improvements: use a CDN like Cloudflare, avoid redirects (like HTTP to HTTPS, or www to non-www), and reduce image loading with lazy loading (I provide an Image component in Suitup UI for this).
Time to put it in practice: Testing tools online to rank your website speed
There are some very good online tools to analyze and rank your website’s speed. The best known is Google’s PageSpeed Insights; another very good one is Pingdom’s. You can use both; you should get similar results. Don’t worry if you don’t get a perfect 100, it’s not always possible. For example, if you use Google Analytics you will never get 100% in PageSpeed Insights, because the analytics script’s max-age header was only 2 hours at the time this article was written. Not even the PageSpeed Insights website reaches 100% for this reason:
After I made all these changes to my website (suitup-ui), I got really nice results. Only server-side rendering and the redirects are still missing; once they are in place I should reach 98% in Google PageSpeed Insights. Current results are:
Google PageSpeed Insights
Performance grade: A 96
faster than: 98% of websites
I hope you learnt something reading this article. I also want to learn new things, so if you have any other tips, please leave a comment!
Did you find a mistake? Something missing?