A sense of speed

Dario Gieselaar
Apr 26, 2016

Improving (perception of) performance in a single-page application

When we set out to transition from a server-side to a client-side application, one of our goals was to decrease page load time and improve the perceived performance of user actions, without the help of faster API calls (which were off the table due to lack of capacity). What follows is a (not entirely complete) list of (micro-)optimizations we used to achieve that goal.

I. Use ServiceWorker to prefetch & cache assets

(Screenshot: the snack that is displayed when a new version is available)

We use a ServiceWorker to prefetch & cache all precompiled assets (JS, CSS and HTML) via sw-precache, a nifty little tool which you can easily integrate into your build process (we use Webpack). This means a user only downloads those assets when the build has changed, decreasing startup time (especially on poor connections) and allowing offline usage. When updated assets have fully loaded, we present a snack to the user, allowing them to reload the page with the new version.
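For Webpack builds, sw-precache has a companion plugin, sw-precache-webpack-plugin, that generates the service worker as part of the build. A minimal sketch (the cache id, output directory and globs here are made-up placeholders, not our actual config):

```javascript
// webpack.config.js — sketch only; merge into your existing config
const SWPrecacheWebpackPlugin = require('sw-precache-webpack-plugin');

module.exports = {
  // ...entry, output, loaders as usual...
  plugins: [
    new SWPrecacheWebpackPlugin({
      cacheId: 'my-app',                         // hypothetical name
      filename: 'service-worker.js',             // emitted next to your bundles
      staticFileGlobs: ['dist/**/*.{js,css,html}'], // the precompiled assets to precache
      stripPrefix: 'dist/',                      // so URLs match how assets are served
    }),
  ],
};
```

The generated worker hashes each asset, so returning visitors only re-download files whose content actually changed.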

II. Store API requests in LocalStorage

When the client fetches data from an API endpoint, we cache the response in LocalStorage. The next time that data is needed, the cached copy is served immediately while a fresh request is started in the background. Once that request completes, both the cache and the data on-screen are updated in a nonintrusive way. More on that below.
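This is the stale-while-revalidate pattern. A minimal sketch of the idea (the function names and the `storage`/`fetchJson` parameters are stand-ins, not our actual API):

```javascript
// Serve cached data immediately, then refresh cache and screen from the network.
// `storage` is any localStorage-like object; `fetchJson` returns a Promise.
// `onData(data, fromCache)` is called once for the cached copy (if any)
// and once more when the fresh response arrives.
function cachedFetch(key, fetchJson, storage, onData) {
  const cached = storage.getItem(key);
  if (cached !== null) {
    onData(JSON.parse(cached), /* fromCache */ true);
  }
  return fetchJson(key).then(fresh => {
    storage.setItem(key, JSON.stringify(fresh)); // update the cache...
    onData(fresh, false);                        // ...and the screen
    return fresh;
  });
}
```

The consumer renders whatever it gets first, so a cache hit means content on screen with zero network latency.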

III. Update optimistically and nonintrusively

Every time the user changes data, we store the changes (as JSON) in LocalStorage, and submit them to the API at a later point. This enables (a kind of) offline support, and facilitates and encourages optimistic updates. The basic idea is that you define an action, which tells the application a) where to submit the data, and b) how to transform the current state into the expected state once the request completes. With that in place, we can make assumptions about saved data and update the interface immediately, allowing the user to continue their workflow without delay.
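A minimal sketch of that idea (this is not our actual library; the action shape, the `renameItem` transform and all names are invented for illustration). Transforms are keyed by action type so the queued actions themselves stay JSON-serializable:

```javascript
// How each action type transforms local state into the expected post-save state.
const transforms = {
  renameItem: (state, payload) =>
    state.map(item => (item.id === payload.id ? { ...item, name: payload.name } : item)),
};

function createQueue(storage) {
  const queue = JSON.parse(storage.getItem('pending') || '[]');
  return {
    // Persist the action, then apply its expected result immediately —
    // the UI never waits on the API.
    dispatch(state, action) {
      queue.push(action);
      storage.setItem('pending', JSON.stringify(queue));
      return transforms[action.type](state, action.payload);
    },
    // Later (e.g. when back online), submit the queued actions for real.
    // `submit({ type, url, payload })` returns a Promise.
    flush(submit) {
      return Promise.all(queue.map(submit)).then(() => {
        queue.length = 0;
        storage.setItem('pending', '[]');
      });
    },
  };
}
```

Because the queue survives in LocalStorage, a page reload (or a dropped connection) doesn't lose unsaved changes.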

IV. Load critical dependencies before switching views

When the user navigates to another view, we wait until every critical dependency has resolved, instead of having each widget load its own data while the interface is being constructed. This prevents content shifting (good for UX), and ensures we only have to build the interface once (good for performance). I don’t have any statistics or research to back this up, but personal experience tells me it also feels faster, even if the loading time is the same. We use nprogress, a subtle progress indicator (YouTube and GitHub use something similar).
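The article doesn't name a router, but in Angular 1 with ui-router this is what the `resolve` block is for: the state isn't entered (and the template isn't compiled) until every resolve promise has settled. A sketch with made-up state and service names:

```javascript
// Sketch: 'projects' only activates once both critical dependencies resolve,
// so the view is constructed exactly once, with all data in hand.
$stateProvider.state('projects', {
  url: '/projects',
  templateUrl: 'projects.html',
  controller: 'ProjectsController',
  resolve: {
    projects: ['ProjectsService', function (ProjectsService) {
      return ProjectsService.fetchAll();   // hypothetical service
    }],
    settings: ['SettingsService', function (SettingsService) {
      return SettingsService.fetch();      // hypothetical service
    }],
  },
});
```

While the resolves are pending, the previous view stays on screen and a progress indicator like nprogress gives the user feedback.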

V. Use classes in performance-critical parts

I love functional programming. It is, however, worse for performance than classes: a closure-based factory allocates a fresh set of functions for every instance, while class methods live once on a shared prototype. We refactored some of our most-used components from closures to classes, which resulted in a performance boost of 100% when constructing a view.

VI. Keep your watchers fast

One of the reasons Angular apps are sometimes a bit sluggish is the performance of the $digest loop. Most optimization strategies focus on reducing the number of watchers, but it’s probably more important to keep them fast. We strive for our watchers to just return a value, and not do any costly computing. To achieve this, we wrote a little library which wraps directive bindings & promises and allows the consumer to reduce one or more values to a single value. A reducer is only called when one of its input values changes, and its result is memoized, keeping our watchers super fast. We also almost always use identity checks instead of object equality checks, which are much, much faster.
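The core of that library is a memoized selector, similar to what reselect popularized. A minimal sketch (not our actual implementation; `createSelector`, `getInputs` and `reduce` are illustrative names):

```javascript
// Returns a function cheap enough to use directly as a watcher:
// it re-runs `reduce` only when one of the inputs changes *by identity*,
// and otherwise returns the memoized result.
function createSelector(getInputs, reduce) {
  let lastInputs = [];
  let lastResult;
  return function select() {
    const inputs = getInputs();
    const changed =
      inputs.length !== lastInputs.length ||
      inputs.some((input, i) => input !== lastInputs[i]); // identity, not deep equality
    if (changed) {
      lastInputs = inputs;
      lastResult = reduce(...inputs);
    }
    return lastResult;
  };
}
```

Because the same object is returned until an input actually changes, Angular's dirty checking sees a stable reference and the $digest loop stays cheap.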

VII. Prevent unnecessary DOM updates

Another Angular performance bottleneck is the DOM update mechanism: it’s smart enough to only change the DOM when it needs to, but not smart enough to batch updates to prevent superfluous layout thrashing. This quickly becomes a problem (especially in IE/Edge) when you have to reconstruct the entire interface when changing views, and it’s even worse when that view contains multiple tables, like ours does. There’s not really a silver bullet here short of upgrading to Angular 2, but you do have a couple of options:

  • Use bind-once only when you’re absolutely sure the data will never change (updating HTML is slower than having a couple more watchers)
  • Use ng-if instead of ng-show/ng-hide to prevent unused components from rendering
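In template terms, both options look like this (a sketch with invented names; `::` is Angular 1.3's one-time binding syntax, which serves the same purpose as the older bindonce library):

```html
<!-- One-time binding: the title never changes after load, so its watcher
     is dropped once the value stabilizes and the DOM is never touched again. -->
<h1>{{::report.title}}</h1>

<!-- ng-if removes the element from the DOM entirely while hidden, so the
     table is never compiled or rendered; ng-show would only toggle CSS
     while keeping all its watchers and DOM nodes alive. -->
<div ng-if="report.detailsVisible">
  <details-table data="report.rows"></details-table>
</div>
```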

Where are the numbers?

It’s a little difficult to measure the performance improvements, because there are a lot of factors in play (browser, cache hits/misses, device specs, API response time). My own device is not very representative of our users’ devices, as I have a pretty beefed-up laptop, but for me, TTFP is anywhere between 100% (without cache, full page refresh) and 500% (in-app navigation) faster. That’s already pretty good, and I fully expect it to get even better as we upgrade to Angular 2.

