tl;dr: less code = less parse/compile + less transfer + less to decompress
This can be a problem even in first-world countries, as the effective network connection a user has might not actually be 3G, 4G or WiFi. You can be on coffee-shop WiFi but connected to a cellular hotspot with 2G speeds.
- Only shipping the code a user needs. Code-splitting can help here.
- Minifying it (UglifyJS for ES5; babel-minify or uglify-es for ES2015+)
- Compressing it heavily (using Brotli ~q11, Zopfli or gzip). Brotli outperforms gzip on compression ratio. It helped CertSimple save 17% on the size of compressed JS bytes and LinkedIn save 4% on their load times.
- Removing unused code. Identify it with DevTools code coverage. For stripping code, see tree-shaking, Closure Compiler’s advanced optimizations and library trimming plugins like babel-plugin-lodash or Webpack’s ContextReplacementPlugin for libraries like Moment.js. Use babel-preset-env & browserslist to avoid transpiling features already in modern browsers. Advanced developers may find careful analysis of their Webpack bundles helps identify opportunities to trim unneeded dependencies.
- Caching it to minimize network trips. Determine optimal lifetimes for scripts (max-age) & supply validation tokens (ETag) to avoid transferring unchanged bytes. Service Worker caching can make your app network resilient & give you eager access to features like V8’s code cache. Learn about long-term caching with filename hashing.
The Bottom-Up and Call Tree views allow viewing exact parse/compile timings:
But, why does this matter?
When we talk about parse and compile being slow, context is important — we’re talking about average mobile phones here. Average users can have phones with slow CPUs and GPUs, little or no L2/L3 cache, and which may even be memory constrained.
What about a real-world site, like CNN.com?
On the high-end iPhone 8 it takes just ~4s to parse/compile CNN’s JS compared to ~13s for an average phone (Moto G4). This can significantly impact how quickly a user can fully interact with this site.
This highlights the importance of testing on average hardware (like the Moto G4) rather than just the phone that might be in your pocket. Context matters, however: optimize for the device and network conditions your users actually have.
Analytics can provide insight into the mobile device classes your real users are accessing your site with. This can provide opportunities to understand the real CPU/GPU constraints they’re operating with.
Factor in the time it takes to fetch and process JS and other resources and it’s perhaps not surprising that users can be left waiting a while before feeling pages are ready to use. We can definitely do better here.
If script executes for more than 50ms, time-to-interactive is delayed by the entire amount of time it takes to download, compile, and execute the JS — Alex Russell
PRPL (Push, Render, Pre-cache, Lazy-load) is a pattern that optimizes for interactivity through aggressive code-splitting and caching:
Let’s visualize the impact it can have.
We analyze the load-time of popular mobile sites and Progressive Web Apps using V8’s Runtime Call Stats. As we can see, parse time (shown in orange) is a significant portion of where many of these sites spend their time:
Wego, a site that uses PRPL, manages to maintain a low parse time for their routes, getting interactive very quickly. Many of the other sites above adopted code-splitting and performance budgets to try lowering their JS costs.
- Memory. Pages can appear to jank or pause frequently due to garbage collection (GC). When a browser reclaims memory, JS execution is paused, so a browser that collects garbage often can pause execution more often than we’d like. Avoid memory leaks and frequent GC pauses to keep pages jank-free.
Progressive Bootstrapping may be a better approach. Send down a minimally functional page (composed of just the HTML/JS/CSS needed for the current route). As more resources arrive, the app can lazy-load and unlock more features.
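The lazy-loading step can be sketched with dynamic import(), caching the in-flight promise so repeated calls don’t re-request the module. Node’s built-in 'path' module stands in here for a hypothetical route or feature chunk you’d normally split out of your bundle:

```javascript
// Lazy-load a feature only when it's first needed, caching the in-flight
// promise so repeated calls reuse the same load.
const featureCache = new Map();

function loadFeature(specifier) {
  if (!featureCache.has(specifier)) {
    // import() returns a promise, so the main thread stays free while it loads.
    featureCache.set(specifier, import(specifier));
  }
  return featureCache.get(specifier);
}

// Hypothetical usage: defer loading until the user navigates to the route.
loadFeature('path').then((path) => {
  console.log(path.join('a', 'b'));
});
```

Bundlers like Webpack turn each dynamic import() into its own chunk, which is exactly the mechanism code-splitting and PRPL rely on.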
Loading code proportionate to what’s in view is the holy grail. PRPL and Progressive Bootstrapping are patterns that can help accomplish this.
Transmission size is critical for low-end networks. Parse time is important for CPU-bound devices. Keeping both low matters.
- Solving the web performance crisis — Nolan Lawson
- Can you afford it? Real-world performance budgets — Alex Russell
- Evaluating web frameworks and libraries — Kristofer Baxter
- Results of experimenting with Brotli for compression — Cloudflare (note: dynamic Brotli at a higher quality can delay initial page render, so evaluate carefully; you probably want to compress statically instead)
- Performance Futures — Sam Saccone
With thanks to Nolan Lawson, Kristofer Baxter and Jeremy Wagner for their feedback.