Journeying from Ionic 2 (beta) to Ionic 3
In Part 1 of this series, we looked at some simple optimisations that we could apply to the Odecee Tech Radar app. Those optimisations were not Ionic-specific, but they provided the biggest benefit-for-effort.
In this article we will look at some Ionic-specific and some other general optimisations which we made to the app, and how they improved the app’s performance.
After the optimisations we made in Part 1, our network performance looked like this over a wi-fi network:
Over a wi-fi network, this is not great. On a mobile device and a slower network, it could easily take more than 20 seconds to load for the first time. Looking at the above graph, there is one obvious culprit: app.bundle.js is 1.3 megabytes. This is simply too large. (Additionally, you may notice that vendor.bundle.js is suspiciously small. This appears to be a build-config problem.)
We needed a way to make these files smaller. Looking inside the app.bundle.js file, we immediately noticed that no minification was being performed on it. Great: we can fix that. But there was another technique available if we could migrate to the full release of Ionic 2 (or Ionic 3, as of the time we commenced this optimisation process): Angular's Ahead-of-Time (AOT) compiler. Using AOT compilation should reduce both the byte-size and the startup time of the app. So we took the plunge…
Upgrading to Ionic 3
Before the upgrade
One of the biggest issues the development team faced when building Tech Radar was that they were working with beta versions of both Angular 2 AND Ionic 2 (from July to December 2016). They hit an enormous number of bugs, most of them known issues, plus a few that were not so well known. The team did an amazing job producing a working application despite building on top of two unstable frameworks.
A decision we made early on was not to invest in the constantly changing unit-testing approaches for Angular 2. Those who tried to keep up with Angular 2 during this time faced many breaking changes with each beta and release candidate. We wanted to avoid this churn, so we invested in writing system tests (browser tests) with the newly released CodeceptJS instead of unit tests.
After an early-but-failed-attempt to upgrade to Ionic 2 in November 2016, the team re-attempted the upgrade to Ionic 3 in June 2017. As well as framework changes, we were also dealing with Angular-platform changes. Since Angular 4 had been around for a few months by this time, there was much better documentation available which allowed the upgrade to take place successfully.
The main reason for wanting to upgrade was that the newer tooling provided:
- AOT compilation (for smaller runtime code)
- Webpack 3 (for tree-shaking & better module bundling)
- IonicPage, a new API that provides both deep-linking functionality (for inviting people to a specific group or radar) and code-splitting (allowing the initial bundle to be slightly smaller)
- Built-in service worker support (although we ended up using Workbox instead)
- Better error logging and debugging support
The approach we took to successfully upgrade the application was to generate a new project, then bring the old code into the new project. That way we could ensure that we had an initially-working setup and could fix the old-code as it was integrated into the new project.
Reducing bundle size (minification)
After minification (using Webpack 3), the bundles weighed in at 334,538 + 1,087,191 = 1,421,729 bytes.
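For reference, here is a hedged sketch of how minification is typically switched on in a plain Webpack 3 config. Ionic's app-scripts wires this up internally, so this is illustrative rather than the app's actual build file, and the surrounding entry/output config is elided:

```javascript
// webpack.config.js (fragment) -- a sketch only. UglifyJS strips whitespace,
// shortens identifiers and removes dead code from the emitted bundles.
const webpack = require('webpack');

module.exports = {
  // ...entry, output, loaders elided...
  plugins: [
    new webpack.optimize.UglifyJsPlugin({
      compress: { warnings: false },
      sourceMap: true // keep production stack traces readable
    })
  ]
};
```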
A happy side-effect of getting deep links to work was that each page with a deep link was also lazily loaded! That's how Ionic's IonicPage class works: it creates a split-point which Webpack sees and uses to create a JS bundle containing just the code needed for that particular page. The effect on performance was minimal, though.
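As a sketch of how this looks in Ionic 3 (the page name and URL segment below are illustrative, not the app's actual routes), a page opts into deep linking and lazy loading with the @IonicPage decorator:

```typescript
// radar.ts -- sketch of an Ionic 3 lazily-loaded page.
import { Component } from '@angular/core';
import { IonicPage, NavParams } from 'ionic-angular';

@IonicPage({
  name: 'radar',            // string name used for navigation & deep links
  segment: 'radar/:groupId' // URL segment, e.g. .../#/radar/42
})
@Component({
  selector: 'page-radar',
  templateUrl: 'radar.html'
})
export class RadarPage {
  groupId: string;

  constructor(navParams: NavParams) {
    // Deep-link parameters arrive via NavParams.
    this.groupId = navParams.get('groupId');
  }
}
```

Navigation then uses the string name rather than the class, e.g. `this.navCtrl.push('radar', { groupId: '42' })`, which is what lets Webpack place the page in its own chunk; the page also needs its own @NgModule that calls IonicPageModule.forChild(RadarPage).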
Reducing the colours in variables.scss
After upgrading to Ionic 3, we noticed something strange. Our CSS file had grown from around 500KB (which is still pretty large) to 2.6MB. <Insert “I know, right?!” clip again>. Suffice it to say, this is a completely ridiculous amount of CSS. I don’t care if you are deploying your app to an app store; this amount of CSS is a code smell which required investigation.
It turns out this was known Ionic behaviour. A bug? Let’s just say the Ionic team are working to improve this.
We reduced the size of the CSS file to 369KB by simply changing the $colors map in our theme’s variables.scss from this:

$colors: (
  primary: color($tr-colors, brand-primary),
  ... // 20 other colours
);

…to simply this:

$colors: (
  primary: color($tr-colors, brand-primary)
);

Ionic generates component styles for every entry in this map, so each extra colour multiplies the CSS output.
So keep an eye on the number of colours in your variables.scss file.
Adding a Service Worker
The primary benefit from using a service worker is that it can cache your static assets (your “app shell”), which means the browser can load your application faster (from the cache rather than from the network). But it is also a key ingredient of Progressive Web Applications — a set of technologies which allow features like offline-mode (allowing you to use the app when there is no network or limited network), notifications and installation of a web app directly onto a mobile device.
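Before any of this caching can happen, the page has to register the worker. This part is just the standard browser API; the file name below is an assumption, not necessarily the app's actual file:

```javascript
// App bootstrap -- standard service worker registration.
// 'service-worker.js' is an assumed file name for illustration.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('service-worker.js')
      .then(reg => console.log('Service worker registered, scope:', reg.scope))
      .catch(err => console.error('Service worker registration failed:', err));
  });
}
```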
Ionic has some built-in support for implementing a service worker, but we found that Workbox was a better (& newer) tool which simplified our implementation. For our app, we wanted to:
- Pre-cache all the static files (every file in the www build folder)
- Go to the network first for requests to our API, then cache the responses (for a future offline-mode feature)
(We also added rules to cache some non-local fonts plus the ionicons.woff2?v... font, as Ionic appends the version number to the font’s URL in the CSS reference to the file 😒.)
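The network-first behaviour described above can be sketched in plain JavaScript. This is a simplified model of the strategy, not Workbox's actual implementation: `cache` can be any Map-like store standing in for the Cache Storage API, and `fetchFn` stands in for `fetch`:

```javascript
// Simplified model of a network-first strategy: try the network, keep a
// copy of the response, and fall back to the cache when the network fails.
// `cache` is any Map-like store; `fetchFn` stands in for fetch().
async function networkFirst(request, cache, fetchFn) {
  try {
    const response = await fetchFn(request);
    cache.set(request, response); // save for future offline use
    return response;
  } catch (err) {
    const cached = cache.get(request);
    if (cached !== undefined) return cached; // offline: serve the last copy
    throw err; // nothing cached either; surface the error
  }
}
```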
Below is an example of our service worker configuration:
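The following is a hedged sketch of such a service worker, using the Workbox 2.x API that was current at the time. The script file name and route patterns are illustrative, not the app's actual values:

```javascript
// service-worker.js -- sketch using the Workbox 2.x API.
importScripts('workbox-sw.prod.js'); // file name is an assumption

const workboxSW = new self.WorkboxSW();

// The precache manifest (every file in the build output) is injected into
// this empty array at build time by the Workbox CLI.
workboxSW.precache([]);

// Network-first for API calls, falling back to cached responses offline.
workboxSW.router.registerRoute(
  /\/api\//,
  workboxSW.strategies.networkFirst()
);

// Cache-first for the versioned ionicons font; the ?v=... query string
// changes the URL, so we match it with a regex.
workboxSW.router.registerRoute(
  /ionicons\.woff2\?v=/,
  workboxSW.strategies.cacheFirst()
);
```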
Running the system tests in parallel

One of the downsides of (browser-based) system tests is that they take a long time to run. On a 16GB MacBook Pro, 95 tests took around 8 minutes to run (about 5 seconds per test). So we investigated whether CodeceptJS could run tests in parallel. Turns out it can :)
The tricky part with running things in parallel is discovering:
- how many tests can be run in parallel
- how many tests can be run in parallel consistently (stability)
- whether this will work on continuous integration (CI) servers
To mitigate the risk of CI not working, we kept the CodeceptJS config that allowed the tests to run in sequence, so that if nothing worked in parallel, we could still run the tests. As it turns out, the most important factor in getting the tests to work on CI was memory: the more memory, the less likely Chrome was to crash.
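A hedged sketch of how this looks with CodeceptJS's run-multiple feature follows; the group names and grep tags are illustrative, not the app's actual test layout:

```javascript
// codecept.conf.js (fragment) -- sketch only. Each named entry under
// `multiple` becomes a child process when run via `run-multiple`.
exports.config = {
  tests: './tests/*_test.js',
  multiple: {
    group1: { grep: '@group1', browsers: ['chrome'] },
    group2: { grep: '@group2', browsers: ['chrome'] },
    group3: { grep: '@group3', browsers: ['chrome'] },
    group4: { grep: '@group4', browsers: ['chrome'] },
    group5: { grep: '@group5', browsers: ['chrome'] }
  }
  // ...helpers, output directory, etc. elided...
};
```

Running `codeceptjs run-multiple --all` executes all the groups in parallel, while plain `codeceptjs run` remains the sequential fallback for memory-constrained CI agents.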
To determine how many tests could be run in parallel, we started measuring the time taken to run different numbers-of test-suites:
All tests were run using headless Chrome 60. Each test suite contained between 1 and 10 tests, so the suites were not directly comparable to one another. YMMV.
When running 10 test-suites in parallel, the CPU was maxing out. When running 15 suites in-parallel, we started to get test instability. So based on these numbers, we opted to group the test-suites into 5 groups and see if we could find a stable-yet-performant balance. After a little trial-and-error, we found a combination of test suites that took no longer than 2mins 15sec to run (1.42sec per test), a time reduction of almost 72%.
Let’s look at the best part of this article: the final results. Below are some graphs illustrating how the performance profile of the application changed after applying each set of optimisations. The data is presented in the following order (left-to-right):
- Pre-Opt — pre-optimised (initial) version of Tech Radar
- Post-opt Part 1 — the optimisations applied to Tech Radar after Part 1
- Post-opt Part 2 — First Load — all the optimisations applied, measured when the app is visited for the first time
- Post-opt Part 2 — Subsequent Load — all optimisations applied, measured on subsequent visits to the app
This graph is interesting because the “Post-opt Part 2 — First Load” optimisations actually increased the number of requests! The additional requests were due to code-splitting (one extra file for the first page) and the service worker (which also loads a Workbox runtime file) — 3 extra files.
For subsequent loads (the last column), there should be zero network requests, but there are actually 6, because Chrome specifically doesn’t route favicon requests through the service worker. Those 6 requests make up the 37kB downloaded (see next graph) every time we load the application, even when using a service worker.
This chart illustrates some important points about optimisation:
- Optimising the performance of an application has significant benefits to users and to the companies hosting the applications. Less data usage leads to lower cost of ownership and better user experience. Win-win.
- Diminishing returns: the benefits of optimisation diminish as more optimisations are applied.
The data for this graph comes from an average of 5 test runs (7 runs were performed, then the min & max outliers were discarded). The pre-opt data is the exception (only one measurement was taken), as the old version of the app was removed before this article was written.
These are wonderful numbers! 😁 They illustrate the benefits of minification (Part 1 versus Part 2) and service workers (first load versus subsequent load). However, the impact of the service worker is hardly noticeable on such a fast network.
If we run the tests again over a slower network to measure the load events (using the same test methodology), we can see more clearly how the service worker improves performance:
The benefit from using service workers increases as the network speed decreases.
What would *you* recommend to further improve the performance of the app?
The production version of the app was not yet ready at the time of writing, but when it is, I’ll update this post.