Optimize SPA bundle size to speed up application loading

Mikhail Sakhniuk · Miro Engineering · Jan 20, 2022 · 13 min read

Hi there, I’m Mike Sakhniuk, and I’ve been a front-end developer for over six years. Currently, I’m a front-end developer at Miro.

This article focuses on single page application optimization. We’re going to explore these areas:

  • How to optimize a web application, and speed up its loading.
  • Why it matters.
  • Which tools come in handy to optimize, measure, and check the results.
  • The benefits of using loadable modules in applications.

Reducing an application's loading time is a complex task that is best addressed as a team effort. Let's focus on what front-end developers can do on their own, leaving aside network-level optimizations such as HTTP/2 and improving server response speed.

Problem

Before acting, it’s important to understand the problem we’re trying to solve: why do we need to optimize something, if the application loads and works just fine?

The download speed of an app has a big impact on user adoption. For example, let’s have a look at analytics from KissMetrics; they’re not recent, but they’re meaningful in our context:

  • Every two extra seconds an app takes to load increase bounce rates by 103%. If you or your company attract customers by paying for each visit to the resource, speeding up loading by just two seconds lets you save on your advertising budget and increase sales and profits.
  • Each extra second spent loading the app reduces the conversion to purchase by 7%. For example, if your company makes $100,000 a day, losing one second will cost you about $2.5 million a year, an amount that could equal the payroll of all developers in the company. Just one second.

To understand where those loading seconds are hiding, let’s see how the browser loads the SPA. If we have a typical monolithic SPA application, loading it would look like this:

  1. Retrieve and parse HTML to build the DOM. The browser receives an HTML document with no content, except for links to additional resources such as the JS bundle and styles. What the user sees right now is a white screen.
  2. Load external resources. At this stage, the JS and CSS files are loaded.
  3. Parse CSS and build the CSSOM. The received styles are parsed, but the user still sees a white screen.
  4. Execute the JavaScript code.
  5. Render the page. Only at this stage does the user see the result of loading the application: its content is displayed.

Schematically, loading the application looks like this:

Loading JS and CSS files can be synchronous or asynchronous. In a typical SPA, all loading steps are blocking, since the user can't start working with the application until all the necessary content has been loaded in the browser. In the image above, this is marked with the TTI (Time To Interactive) label: this is the point in time when the user can start working with the application.

The FCP (First Contentful Paint) label marks the moment when the user can view content instead of a white screen. The FCP moment can occur either during JS file loading, or at the very end, near the TTI moment.

The application loading time is proportional to the number and size of the files downloaded at startup. Now that we know how much a second costs, let's make sure that these files weigh as little as possible, and that there are as few of them as possible.

Optimizing web pages and loading them

Before diving deeper into the optimization of the application bundle, let’s take a look at general approaches to speed up web page loading.

Removing unused code

A fairly common problem is code that will never be executed. This code shouldn’t end up in production. Here are some examples:

  • Mocks. A few years ago, while working on a project, I reduced the size of the application bundle by 80% by deleting a JSON file containing the company structure of 5,000 employees. This file temporarily replaced several API requests while the backend was in development, and someone had forgotten to remove the import.
  • Old modules. When developing new functionality, we often build a number of prototypes and write entire modules, which can easily slip into production.
  • Styles. Libraries like Bootstrap or Tailwind CSS pull in hundreds of unnecessary classes. There are tools to remove unused styles, such as PurgeCSS.
  • Libraries. Besides styles, there are also cases when a huge library is pulled into a project for the sake of one function or component. In such cases, it's worth importing only the piece of code you actually use, and letting tree-shaking drop the rest (see the sketch after this list).
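
A minimal sketch of the idea, using lodash as an example; any large library with per-module entry points works the same way:

// Importing the whole library for a single helper pulls everything into the bundle:
// import _ from "lodash";

// Importing only the module you use lets the bundler drop the rest:
import debounce from "lodash/debounce";

function saveDraft() {
  /* send the draft to the server */
}

// Only debounce (and its own dependencies) end up in the chunk.
const debouncedSave = debounce(saveDraft, 300);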

Compressing the code

The JS and CSS code that we write is assembled into files by the bundler. These files can be further compressed by shortening variable names and removing whitespace and comments.

In most cases, compiled applications will already be compressed, since all major frameworks minify the production build by default. If you use a custom build configuration, make sure your code is minified. This alone can reduce the size of the application by 50–60%.
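
For reference, a minimal webpack configuration sketch; in production mode, webpack minifies JS with Terser out of the box, so no extra plugins are usually needed:

// webpack.config.js
const path = require("path");

module.exports = {
  mode: "production", // enables built-in minification and other optimizations
  entry: "./src/index.js",
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "bundle.js",
  },
};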

Compressing images

I’m sure many of you have come across pages on the Internet that took an incredibly long time to load. Pictures on these pages could take up tens of megabytes, since they were published in their original format.

There are tools that enable compressing images tens or even hundreds of times without noticeable quality loss. Static images that you include directly in the project can be processed by webpack. If the pictures come from a server, it's enough to hand the task to a backend developer, who can usually solve it in a couple of hours.

Compressing fonts

Sometimes, font size can be as much as 500 KB.

The main approach to reducing the size of downloaded fonts is to choose the right web font format: the most recent format with the best compression that your target browsers support, which today means WOFF2, with WOFF as a fallback.

If you need to support older browser versions, you can let the browser choose the optimal font format for itself and load only the selected format.
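
A minimal sketch of how this looks with @font-face (the font family and file paths are placeholders); the browser downloads only the first source whose format it supports:

@font-face {
  font-family: "MyFont";
  /* Modern browsers pick the smaller WOFF2 file; older ones fall back to WOFF. */
  src: url("/fonts/myfont.woff2") format("woff2"),
       url("/fonts/myfont.woff") format("woff");
  font-display: swap;
}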

Another approach to reduce font size is by removing unused glyphs from the font file. This is suitable when the font is used for headings or in a logo.

You can remove glyphs in dedicated apps, and if you use Google Fonts in your project, you are very lucky: in the link to the font, you can pass the text parameter with the value in the form of characters that you want to use in the font. As a result, the server serves the font file with only the selected glyphs. The final size of such a font can be ten times smaller than the original one.

Usage example:

<link href="https://fonts.googleapis.com/css?family=Roboto&text=Miro" rel="stylesheet">

Tackling bundle size

Code splitting enables chunking the application, and discarding code that is not required when starting the SPA.

Code splitting is the cornerstone of application size optimization. This process divides the main application module into parts or chunks that can be loaded when needed. Moreover, this can also happen after the application is loaded.

Route splitting

The simplest and most popular SPA optimization approach is to split the application by pages. The bundler creates a separate chunk for each page of the application, and the chunks are then loaded only when the user navigates to those pages.

The diagram above shows how the application is divided into chunks with application pages, the main module used to initialize the framework, modules, data, and the router.

When the application opens in the browser, the user loads only one page. So during startup, only two files are downloaded: the main one and a single page chunk. Since an application can easily consist of dozens of pages, this approach alone can significantly reduce the application launch time.

One-time code

After the SPA is chunked into pages, you can move down one level and see how the root part and the pages inside are arranged. The first thing to look for is one-time code.

By one-time code, I mean parts of the application that the user sees only once. For example:

  • Registration and authentication forms, if they’re not available on a separate page.
  • Onboarding and training new users.
  • Tips and information blocks.

Rarely used code

This is code that builds parts of the application that the user is likely to use only from time to time. We can move this code to chunks that aren’t needed when loading the SPA. Examples of this code are notification blocks and help information.

Google Docs Help

On the right in the image above is a screenshot of the browser developer tools with a list of loaded JS files. Opening the help (on the left in the image) triggered downloading a dozen or so files instead of only one. This is an unwanted side effect of excessive optimization: if you move a block to a separate chunk, and then move a dozen more components inside that block to their own chunks, files and content start loading recursively. As a consequence, the component takes even longer than usual to load.

It’s good practice to avoid chunking everything just to lighten the size of pages or the root module, as it can backfire. For a noticeable performance increase, it’s worth starting with blocks and modules that weigh at least 100 KB.

Hidden blocks

Besides splitting an application by pages, you can also split the content within pages that is hidden behind collapsed boxes or tabbed panels. These blocks behave like internal routing within a page, so they lend themselves well to the same optimization.

At Miro, we use this approach a lot, as it greatly speeds up the first page load. Knowing how often users open a particular tab, we can safely split out the blocks that users visit less often.

The screenshot above shows a section in the Miro settings page. The content of the page is organized in tabs, which open one at a time. As a result, additional JS code is loaded for each tab opening.

Besides optimization, splitting the SPA into chunks brings another significant advantage: caching. After the browser downloads the application files, it saves them for later use. If the user reopens a page or a component, the chunk is no longer loaded from the server but taken from the cache instead, which lets pages open instantly. Most importantly, when you update your application, the modules that you didn't touch don't change, so the browser doesn't need to re-download unchanged pages and components of the new version of the application.
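
Long-term caching works best when chunk file names include a hash of their content, so unchanged chunks keep the same URL between releases. A sketch of the relevant output settings, assuming webpack is the bundler:

// webpack.config.js
module.exports = {
  output: {
    // [contenthash] changes only when the chunk's content changes,
    // so the browser keeps using its cached copy of untouched chunks.
    filename: "[name].[contenthash].js",
    chunkFilename: "[name].[contenthash].chunk.js",
  },
};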

Dialog boxes

We place content in dialog boxes when it’s important for the user to access it from any page in the application. Most often, such components are in the root module of the application, which means that they’re loaded when the SPA is first launched.

These dialog boxes are hardly ever necessary when the application is loading, so they also qualify for moving to their own chunks.

In large projects, dialog boxes can take up a significant portion of the application. For example, the Billing module in Miro weighs over 350 KB; a separate development team works on it.

Working with loadable modules offers a number of advantages:

  • Source code structure: by moving components to modules, you can improve the organization of your source code, and the structure of the files in your project.
  • Isolation of work on components: it’s easier for different teams to work on the project, especially if you move the modules to separate repositories.
  • A / B testing: dynamic component loading on demand enables replacing a module with another under different conditions. This gives businesses ample opportunity to test hypotheses, and to improve product quality.
  • Easy migration from an old module to a new one, with the possibility of a gradual rollout that doesn't affect the application as a whole. One user won't even realize that a new modal window has opened for them, while other users still see the old module.

Localization

The optimization principle is similar: it’s important to ship only the language that the user will use. Other languages can stay on the server.

The technical side of the solution

So far we’ve explored the product side of solving the problem; we learned what to optimize and in which order. Now let’s see how to do it.

import("./module")

A crucial building block is dynamic import. It looks like a function, but it isn't one: we can't pass it around as an argument or do anything with it other than call it. The call returns a promise, and once the promise is fulfilled, we get the module that we imported:

import("./foo").then(foo => console.log(foo.default));

const module = await import("./foo");

Coming back to the localization example, dynamic import enables loading localization files from the server as JSON without involving backend developers. The files remain in the application source code for ease of development and debugging:

const language = getUserLanguage();

import(`./locale/${language}.json`).then(locale => { /* … */ });

The example above shows the main power of this syntax: the import accepts a string, so we can build it dynamically. There is a small caveat, though: the path can't be fully dynamic, because the bundler has to understand which files it may need to turn into chunks. In the example, the path has a static prefix and suffix, so the code is valid: the bundler creates a chunk for every matching file in the locale directory, and only the requested one is fetched at runtime.

In the React world, we can also easily import entire components and pages dynamically. There's a dedicated React.lazy method for this:

const Component = React.lazy(() => import("./Component"));

const App = () => (
  <Suspense fallback={<div>Loading…</div>}>
    <Component />
  </Suspense>
);

Using Suspense, we can show a fallback component while the needed component is being loaded. This gives us a consistent way to handle the loading process, and the user understands that the necessary block of the application is about to be displayed.

This is an example of the most popular paginated optimization approach:

const Home = React.lazy(() => import("./Home"));
const Profile = React.lazy(() => import("./Profile"));

const App = () => (
  <Router>
    <Suspense fallback={<div>Loading…</div>}>
      <Switch>
        <Route exact path="/" component={Home} />
        <Route path="/profile" component={Profile} />
      </Switch>
    </Suspense>
  </Router>
);

React.lazy has one flaw: such a component can't be used with server-side rendering. If you need SSR, you can use an alternative such as Loadable Components.

In the meantime, React has introduced Server Components, or zero-bundle-size components. These are components that aren't included in the bundle at all; they're executed on the server, and the result of the execution is sent to the client as a Virtual DOM chunk. While Server Components are waiting for a stable release, we can try them in an experimental build of React, or in the latest version of Next.js.

Next.js

While Next.js is often referred to as a server-side rendering tool, I highly recommend trying it out for SPA development, as it packs almost all the major load speed optimizations out of the box; namely:

SSG

Static Site Generation allows each page of the application to be rendered during the build process. As a result, when the page is first loaded, the browser receives HTML populated with content, which the browser instantly renders. Therefore, it doesn’t matter how much JS code is loaded; the user sees the page immediately. This gives the feeling that the application is loading very quickly. And with an optimized JS bundle, you get a perfect result.
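
A minimal sketch of an SSG page in Next.js (the file path, URL, and data shape are made up for illustration); getStaticProps runs at build time, so the browser receives ready-made HTML for this route:

// pages/pricing.js
export async function getStaticProps() {
  const res = await fetch("https://api.example.com/plans"); // hypothetical API
  const plans = await res.json();
  return { props: { plans } };
}

export default function Pricing({ plans }) {
  return (
    <ul>
      {plans.map((plan) => (
        <li key={plan.id}>{plan.name}</li>
      ))}
    </ul>
  );
}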

Routing with prefetch

Next.js comes with a built-in file-system based router: every file in the pages directory becomes a route. This gives a cleaner project structure and an automatic division of the project into page chunks.

The main feature of this router is that prefetching is configured out of the box. When links to internal pages enter the viewport, or when the mouse cursor hovers over them, the corresponding page chunks start downloading automatically, so those pages open for the user instantly.
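
For example, a navigation sketch (in Next.js versions from around the time of writing, the nested <a> element is required; newer versions render the anchor for you):

import Link from "next/link";

// In production builds, Next.js prefetches the chunks for internal links
// as soon as they enter the viewport.
export default function Nav() {
  return (
    <nav>
      <Link href="/">
        <a>Home</a>
      </Link>
      <Link href="/profile">
        <a>Profile</a>
      </Link>
    </nav>
  );
}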

Image, font, and script optimization

Built-in optimizations for images, fonts, and plug-in scripts speed up page loading. You don’t need to configure anything; just use the provided API.
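
For instance, the next/image component lazy-loads the picture and serves an automatically resized and compressed version; the file path here is just a placeholder:

import Image from "next/image";

export default function Avatar() {
  return <Image src="/images/avatar.png" alt="User avatar" width={64} height={64} />;
}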

Business logic and state management

If we have a large application, then it most likely has a central state with various handlers; for example: Redux and Redux Saga.

We register the central state at the topmost level of the application. It's important not to forget about it when splitting: the business logic should be separated along with the detachable SPA components.

There are many code splitting solutions for Redux. If you use MobX in your application, you're in luck, as it is splittable out of the box; just take care of a sensible state architecture.
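
One common pattern for Redux is to register reducers dynamically when their chunk arrives. A sketch of the idea, built on the store's replaceReducer method (the names userReducer, injectReducer, and the billing module path are illustrative, not taken from the article):

import { createStore, combineReducers } from "redux";

const userReducer = (state = {}, action) => state; // placeholder static slice

const staticReducers = { user: userReducer };
const asyncReducers = {};

export const store = createStore(combineReducers(staticReducers));

// Called by a lazily loaded chunk to register its own slice of state.
export function injectReducer(key, reducer) {
  asyncReducers[key] = reducer;
  store.replaceReducer(combineReducers({ ...staticReducers, ...asyncReducers }));
}

// When the Billing chunk loads, it brings its reducer along:
import("./billing/reducer").then(({ default: billingReducer }) => {
  injectReducer("billing", billingReducer);
});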

How to implement code splitting in other libraries and frameworks? An obvious life hack is to find the code-splitting section of the documentation. The main goal of any framework (Angular, Vue, React, Svelte, and so on) is to speed up and optimize the loading of applications; check their documentation to learn how to do it correctly.

The approaches described above also apply to any other stack.

Measuring and controlling app size

Let’s close the article with a brief overview of tools that can help measure the size of the bundle and control the optimization process.

Lighthouse

The main utility for analyzing your application, Lighthouse is built into the Chrome browser; it displays key app scores on a 100-point scale. It includes tips and recommendations to improve the scores.

Lighthouse also has a CLI version that can be integrated into CI/CD to measure the performance of each new build automatically.
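
For example, a typical invocation might look like this (the URL is a placeholder):

npm install -g lighthouse
lighthouse https://example.com --output html --output-path ./report.html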

Webpack Bundle Analyzer

This webpack plugin analyzes the final build of the application and displays the result as an interactive page showing all the application chunks and the modules inside them.

It helps identify large modules, and it provides information on what to optimize first.
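
Wiring it up is a one-liner in the webpack configuration; running a build then opens the treemap report (a sketch assuming webpack is the bundler):

// webpack.config.js
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

module.exports = {
  plugins: [new BundleAnalyzerPlugin()],
};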

Source-map-explorer

The last tool is source-map-explorer.

It helps you understand which modules your chunks are made of, and it looks at the application size from a slightly different angle: it parses the source map file, and it shows the result as a project structure.
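
Typical usage from the command line (the bundle and map file paths are placeholders for whatever your build produces):

npm install -g source-map-explorer
source-map-explorer dist/bundle.js dist/bundle.js.map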

Join our team!

Would you like to be an Engineer at Miro? Check out our opportunities to join the Engineering team.
