JavaScript on the Desktop, Fast and Slow

Four Tips for Faster Electron Apps and Node.js Tools

Too long? Check out the tl;dr at the bottom. Too short? Join us in January 2019 at Covalence, a conference about JavaScript on the desktop, for performance talks from experts.

On the desktop, JavaScript is a little bit like CGI – when noticed, it upsets people. “Why didn’t you build this in [insert another language here]”, they’ll yell from the Hacker News and Reddit rooftops.

I make the comparison with CGI because I am convinced that developers (and certainly end users) do not realize just how prevalent JavaScript on the desktop is. The user interface in Battlefield 1, for instance, is built with React, TypeScript, and MobX. Roughly 1,000 components make it possible.

Nvidia’s GeForce Experience, the app included with Nvidia’s drivers on Windows, ships with Node.js – a peek at its source shows it is used for game streaming, background tasks, and tons of other little things.

Adobe ships the Chromium Embedded Framework and Node.js with all its Creative Suite applications like Photoshop, Lightroom, or Premiere. The combination enables cross-platform extensions.

And finally, all the Node.js tools and the many Electron apps. Popular examples are Visual Studio Code, Slack, or Skype, but there are plenty of lesser-known examples, like the installer for Visual Studio (yes, the big behemoth is installed by an Electron app).


How to Improve Performance

Whenever Electron comes up, so does performance. It’s an overloaded term, but to most people, it means one of three things:

  • Memory: The lower the memory usage, the better.
  • Energy: The less battery your app uses, the better.
  • Speed: The less time it takes for operations to succeed, the better.

With that in mind, here are my personal top four “easy things to do” to make JavaScript on the desktop more performant.

1) Modules

Let’s check out some code many of us would write: In this example, we’re simply trying to figure out whether or not the dotJS homepage is reachable.

const isReachable = require('is-reachable')

async function isDotReachable() {
  const dotReachable = await isReachable('dotjs.io')
  console.log(dotReachable)
}

As a community, we tend to put our require() calls at the top of files. That works for servers, but is problematic on user machines: Unless compiled or transpiled¹, require() is not a compiler directive. It’s a heavy operation that costs time and resources.

First, the loader behind require() does a lot of work: requiring a module involves plenty of disk I/O and can be a slow operation all by itself. Second, loading a module means not just finding it, but also executing its entry script.

To resolve the value of const isReachable, the index.js of is-reachable is executed. There, all of its dependencies are resolved. One of them is port-numbers, which loads two JSON files as part of its entry script.

Our example looks innocent, yet parses 94k lines of JSON. That’s quite the task, severely reducing startup speed.
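If you want to see what a single require() costs on your own machine, a rough timer around the call is enough. A minimal sketch, with numbers that will vary wildly between machines and cache states:

// How expensive is this require(), really?
console.time('require is-reachable')
const isReachable = require('is-reachable')
console.timeEnd('require is-reachable')
// The first call pays the full cost; later require() calls hit the module cache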

One quick way to improve performance is to allocate and use resources only when you actually need them. require() caches its results, so you can safely move the statement into the function. If you prefer having dependencies listed at the top of the file, consider overriding require() with a lazy loader, brought to you by the developer behind is-reachable².

// Don't require the module until you need it
// require() caches results
async function isDotReachable() {
  const isReachable = require('is-reachable')
  const dotReachable = await isReachable('dotjs.io')
  console.log(dotReachable)
}

To its credit, port-numbers has actually applied this method. is-reachable, however, went for an even better fix and simply removed port-numbers. Modules are a convenient shortcut if resources do not matter, but if you’re out for performance, simple implementations using built-in methods almost always outperform dependency trees:

// Do you even need a module?
function isDotReachable() {
  return new Promise((resolve, reject) => {
    const http = require('http')

    http.get('http://dotjs.io', () => {
      resolve()
    }).on('error', (e) => {
      reject(e)
    })
  })
}

Now that we’ve recognized the performance cost of require(), can we take some load off its shoulders? Node.js, Electron, and Chrome use the V8 JavaScript engine, which performs just-in-time compilation to turn JavaScript into something your machine can execute. That introduces a significant overhead – and if your users execute your code more than once, it might make sense to keep a copy of the compiled code around.

That concept is called “code caching”. Chrome has started to automatically code-cache the JS found on popular websites, but you too can cache your Electron app’s and Node.js script’s code.

The easiest way to make use of code caching is to use the module v8-compile-cache, which has cut Yarn’s load time in half.
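Using it takes a single line. Require it at the very top of your entry script (assuming the module is installed), so every module loaded afterwards can be served from the compile cache:

// Load the compile cache before anything else
require('v8-compile-cache')

// Modules required after this point benefit from cached compilation
const isReachable = require('is-reachable')
// ...the rest of your app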

2) Painting

Painting is the process of turning HTML, CSS, and JavaScript into pixels. If your users use your app for more than a few minutes, make sure that painting is as cheap as possible, and that it only happens when actually required.

In my experience, optimizing paint performance isn’t black magic; it’s just rarely considered by Electron app developers. Chrome has excellent developer tools, and the Performance Monitor in particular makes it easy to spot rendering issues. Since I presented these tips first at dotJS, I took the liberty of using the dotJS homepage as an example.

What’s happening here? Each green flash is a paint; the right side shows how the page uses my 2018 MacBook Pro’s resources. There’s a fancy animation at the top that’s implemented in JavaScript. The problem: it causes the whole page to repaint, multiple times a second, even when the animation isn’t in view. A look at the CPU counter in the upper right shows that my computer is quite busy repainting otherwise unchanged pixels on screen.

Open up the Performance Monitor and check out your own app’s rendering performance. You can likely improve things without changing the user experience.
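A common fix for exactly this kind of issue is to only drive an animation while it is actually visible. A minimal sketch, assuming a JavaScript-driven animation and a hypothetical .animated-header selector, could use an IntersectionObserver:

const header = document.querySelector('.animated-header') // hypothetical selector
let frameId = null

function tick() {
  // ...update the animation here...
  frameId = requestAnimationFrame(tick)
}

// Start the animation loop when the element scrolls into view, stop it when it leaves
const observer = new IntersectionObserver(([entry]) => {
  if (entry.isIntersecting && frameId === null) {
    frameId = requestAnimationFrame(tick)
  } else if (!entry.isIntersecting && frameId !== null) {
    cancelAnimationFrame(frameId)
    frameId = null
  }
})

observer.observe(header)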

3) Not All Code Is Equal

In my experience, we in the JavaScript community rarely run our code through benchmarks. There are, of course, exceptions, but I can’t say that the majority of my modules come with a benchmark. Unlike most other development environments (say, Unity or Visual Studio), most of our popular editors and IDEs do not come with benchmarking tools.

If you’re writing methods that are only executed once, that’s probably fine. For apps, that’s rarely the case: calls happen all the time. Not comparing our implementations with alternatives is a problem. Let’s look at a simple example:

// Option 1
const divs = document.querySelectorAll('.test-class')
// Option 2
const divs = document.getElementsByClassName('test-class')
// 🤔 Which one is faster? And by how much?

Run the benchmark on your own machine, but here are the numbers for mine: querySelectorAll() is 99% slower. Our old friend getElementsByClassName() ran more than 29 million times before the newcomer had even broken 400k executions.

Without checking your assumptions and benchmarks, your apps are likely to contain plenty of code that is 99% slower than other implementations. Do not assume that newer methods are faster. Writing fast JavaScript is very much possible.
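You don’t need a framework to start checking assumptions like this. A rough micro-benchmark sketch follows; results are easily skewed by engine optimizations and vary between machines, so treat the numbers as directional:

// Count how many times fn() runs inside a fixed time window
function benchmark(label, fn, durationMs = 1000) {
  const end = performance.now() + durationMs
  let iterations = 0

  while (performance.now() < end) {
    fn()
    iterations++
  }

  console.log(`${label}: ${iterations} iterations in ${durationMs}ms`)
}

benchmark('querySelectorAll', () => document.querySelectorAll('.test-class'))
benchmark('getElementsByClassName', () => document.getElementsByClassName('test-class'))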

There are cases where JavaScript simply doesn’t cut it. For that reason, both Electron and Node.js allow for the occasional piece of native code. Thanks to excellent support for native addons in Node.js, writing C, C++, or Objective-C and calling it from JavaScript is doable. You can even create fully native user interfaces, if that’s something you’re after.

As an alternative, consider WebAssembly and Rust. Not only is Rust the cool thing to learn in 2018, it also comes with excellent JavaScript interop tools. Check out the following example, which is compiled with Parcel:

// Import a wasm file like it's no big deal
const { add } = await import('./add.wasm')

// 🙀 Or, import straight from Rust
// const { add } = await import('./add.rs')

const result = add(2, 3)

// ./add.rs
#[no_mangle]
pub fn add(a: i32, b: i32) -> i32 {
  a + b
}

4) Respect the Application Lifecycle

As web developers, we’re used to the browser’s paternal instinct: whenever the user’s attention wanders away, so do the resources allocated to a browser tab. While operating systems increasingly employ similar techniques, it’s still up to you, the app developer, to make sure your app isn’t hogging resources while it’s minimized or in the background.

There are plenty of events to listen for (switching from AC power to battery, user presence, network connectivity, system load), but if you’re just getting started, consider at least reacting to your app’s visibility. If your app is completely hidden from view, consider stopping animations and increasing the interval of polling operations.

document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    // Suspend all expensive operations
    // 🔪 setInterval()
    // 🔪 Animations
    // 🔪 Not-urgent network requests
  } else {
    // Continue!
  }
})

If your application is likely to be running in the background, think about ways you can put your app “to sleep” while it’s there.
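In Electron, the main process can go a step further than visibilitychange. A minimal sketch using Electron’s powerMonitor module (usable once the app is ready; some of its events are platform-specific, so check the docs for your targets):

const { app, powerMonitor } = require('electron')

app.on('ready', () => {
  powerMonitor.on('on-battery', () => {
    // Lower polling frequency, pause prefetching and other background work
  })

  powerMonitor.on('on-ac', () => {
    // Back to full speed
  })

  powerMonitor.on('suspend', () => {
    // The machine is going to sleep: stop timers, flush state to disk
  })
})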

These four steps are good starting points for being thoughtful about performance in your Electron apps, Node.js CLI tools, and other uses of JavaScript on the desktop. Thanks for reading!


In Summary

JavaScript on the desktop is everywhere, from Battlefield 1 to the Creative Suite and the Visual Studio installer. To make yours blend in as well as good CGI does in movies, consider these four tips:

  1. Don’t call require() until you need the dependency. Make use of V8 code caching.
  2. Measure and reduce paints and layout calculations.
  3. Benchmark your code, double-check assumptions, consider native addons and Rust/WebAssembly.
  4. Respect the application lifecycle and consume fewer resources when the app or parts of it are not actively being used.

[1] If you are using webpack or a similar module bundler, you are likely using a custom loader. That does not guarantee that it’s performant.

[2] I’m well aware of the irony behind recommending a module that helps with loading so many damn modules.

Photo by Patryk Grądys