How I wrote the Fastest JavaScript UI Framework

Bold claim, I know. And tomorrow it might not even be true. But right now, with the release of Chrome 72, Solid has taken the coveted top spot in the JS Frameworks Benchmark. (Technically #5, but the top 4 implementations are handwritten “vanilla” reference implementations that have the implementer directly managing DOM API mutations.)

A lot more goes into writing a JS UI library than chasing benchmarks. I have the distinct advantage of a substantially smaller user base than most competitors. But I woke up this morning, on my birthday no less, to see that while I’ve been spending my recent time improving compatibility and adding features, Solid had silently crept up thanks to the latest Chrome. It was the realization that this approach will only get better as browsers get more optimized that prompted me to write this article. I didn’t really invent anything new; it’s the way Solid puts it all together that makes the difference.

1. Pre-Compile your Views

This is a big one. While it doesn’t necessarily produce the least overall code to transfer over the wire, the work you do ahead of time pays dividends later. This compilation doesn’t need to happen during the build step; it can happen in the client, as long as you aren’t paying the cost on every update. For smaller apps especially, the performance hit of compiling on the client can be minimal. Solid compiles with Babel ahead of time, but the important part is that the compilation happens at all.
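The "compile once, not on every update" idea can be sketched with tagged template literals. This is a hypothetical illustration, not Solid's implementation: JavaScript guarantees that the same tagged-template call site passes the same (frozen) strings array on every call, so that array can key a cache and the expensive compile step runs only once per template.

```javascript
// Hypothetical sketch of amortized client-side compilation. A tagged
// template literal receives the SAME strings array on every evaluation
// of the same call site, so it works as a cache key.
const cache = new Map();

function compile(strings) {
  // Stand-in for the real, expensive work (parsing, building a template).
  return { holes: strings.length - 1 };
}

function html(strings, ...values) {
  let tpl = cache.get(strings);
  if (!tpl) {
    tpl = compile(strings);   // paid once per template
    cache.set(strings, tpl);
  }
  return { tpl, values };     // per-update work handles only the values
}

const greet = (name) => html`<div>Hello ${name}</div>`;
const a = greet("Ada");
const b = greet("Bob");
console.log(a.tpl === b.tpl); // true: both renders share one compiled template
```

This is essentially why libraries built on Tagged Template Literals can afford to "compile" in the client: the cost is paid on the first render, not on each update.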

JSX is a blessing for pre-compilation: clear, unambiguous syntax trees. Other methods like HTML string parsing work too, but with JSX the intent is always clear even when mixed with JavaScript; you don’t need to invent your own convention to indicate what should and should not be parsed. The same is achievable in other ways with specialized template files and loaders. You just get so much for free with JSX.

Now, one can note that using JSX and pre-compilation is fairly common in modern frontend development, but it’s the combination with the other techniques below where it shines.

2. Use Optimized DOM APIs

Most libraries use optimal methods to apply changes to DOM nodes: the quickest ways to render text, set attributes, and update styles or classes. Thanks to careful optimization around the longest increasing subsequence algorithm, many can apply the minimal number of DOM operations when reordering lists.
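To make the list-reordering point concrete, here is a hedged sketch (not any particular library's code) of why the longest increasing subsequence matters: keyed items whose new positions form an increasing sequence relative to their old order can stay in place, so only the remaining items need DOM moves.

```javascript
// Returns the indices of one longest increasing subsequence of `arr`,
// using the standard patience-sort technique (O(n log n)).
function longestIncreasingSubsequence(arr) {
  const piles = [];                           // piles[k]: index ending an LIS of length k+1
  const prev = new Array(arr.length).fill(-1);
  for (let i = 0; i < arr.length; i++) {
    let lo = 0, hi = piles.length;
    while (lo < hi) {                         // binary search for the pile to extend
      const mid = (lo + hi) >> 1;
      if (arr[piles[mid]] < arr[i]) lo = mid + 1; else hi = mid;
    }
    if (lo > 0) prev[i] = piles[lo - 1];
    piles[lo] = i;
  }
  const result = [];
  for (let i = piles.length ? piles[piles.length - 1] : -1; i >= 0; i = prev[i]) {
    result.unshift(i);
  }
  return result;
}

// Old order: a b c d e. New order: a c b d e.
// Map each new item to its old index:
const oldIndices = [0, 2, 1, 3, 4];
const stable = longestIncreasingSubsequence(oldIndices);
console.log(stable.length); // 4: four items keep their relative order,
                            // so only ONE node must actually be moved
```

Without the LIS, a naive reconciler might move several nodes to produce the same final order; with it, the number of moves is minimal.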

However, the fastest way to create nodes and add them to the DOM is to deep clone a template and append the subtree. Not element.innerHTML, and not even document.createElement. For many imperative renderers, like those found in Virtual DOM libraries, it’s harder to determine the shape upfront. That’s because no matter how declarative it appears, each JSX element just becomes a function call. The rendered Virtual DOM can be analyzed, and it’s possible to insert some hints into the JSX, but the majority of the shape isn’t known until runtime.

However, HTML string renderers, whether built on custom DSLs or on Tagged Template Literals, do not have this shortcoming. They parse the whole string, separating the dynamic parts from the static parts, and can push the static parts into a template that is later deep cloned.
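The static/dynamic split these renderers perform can be sketched in a few lines. This is an illustrative toy, not any library's real parser: the static strings are joined with placeholder comments marking each dynamic "hole", yielding static HTML suitable for a clonable template.

```javascript
// Toy sketch of the static/dynamic separation an HTML-string renderer
// performs at parse time (illustrative only, not a real library's API).
function split(strings, ...values) {
  // Static HTML with a comment placeholder marking each dynamic hole.
  const staticHtml = strings.join("<!--hole-->");
  return { staticHtml, dynamics: values };
}

const name = "Ada";
const { staticHtml, dynamics } = split`<div>Hello <span>${name}</span></div>`;
console.log(staticHtml); // "<div>Hello <span><!--hole--></span></div>"
console.log(dynamics);   // ["Ada"]
```

The static half never changes, so it can be turned into a template once; only the holes need work on updates.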

Solid takes a similar approach. Its compiler generates code based on the shape of the JSX, separating the static parts into an HTML template that is cloned on initial render, and the dynamic parts into optimized code that runs as needed.
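The shape of that compiled output looks roughly like the following. To be clear, this is NOT Solid's actual compiler output; it is a hand-written sketch of the pattern: a static `<template>` built once, a fast deep clone per instance, and an updater that touches only the dynamic node. It assumes a browser environment.

```javascript
// Illustrative sketch of compiled-output shape (not Solid's real output).
// Requires a browser DOM.
function createTemplate(html) {
  const t = document.createElement("template");
  t.innerHTML = html;
  return t.content.firstChild;
}

let greetingTmpl; // built lazily, exactly once, for this "component"

// Hypothetical output for: <div class="greeting">Hello <span>{name}</span></div>
function Greeting(name) {
  greetingTmpl ??= createTemplate(`<div class="greeting">Hello <span></span></div>`);
  const root = greetingTmpl.cloneNode(true); // one fast deep clone per instance
  const span = root.querySelector("span");
  span.textContent = name;                   // only the dynamic part is touched
  return root;
}

// In a browser: document.body.appendChild(Greeting("Ada"));
```

All the static structure comes from a single `cloneNode(true)` call, which is why this path beats building the tree node-by-node with `document.createElement`.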

3. Use Fine Grained Change Management

A bit more controversial, but this is actually the reason Solid pulled ahead this week. By fine-grained change management I mean event-based key-value stores like MobX, KnockoutJS, or S.js.

There are faster approaches for benchmarks. In fact, taking the two approaches above and just running the update cycle top down, keeping cached values in closures for dirty checking (and never reading from the actual DOM while doing writes!), is about as fast as things get for most benchmarks. However, at a certain point, when dealing with nested data and partial updates, the overhead of subscription creation and teardown becomes less than the cost of the wasted work of running top down. The control of being able to set boundaries on change allowed Solid to have no truly weak tests in the benchmark, even if it wasn’t the absolute fastest in every test.
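The subscription mechanism at the heart of these fine-grained stores can be shown in miniature. The names below (`signal`, `effect`) are hypothetical, in the spirit of S.js and MobX rather than either library's real API: a computation automatically subscribes to exactly the values it reads, so a change re-runs only the affected work instead of diffing top down.

```javascript
// Minimal sketch of fine-grained, event-based change management
// (hypothetical API, in the spirit of S.js/MobX).
let currentObserver = null;

function signal(value) {
  const subscribers = new Set();
  return {
    get() {
      if (currentObserver) subscribers.add(currentObserver); // auto-subscribe the reader
      return value;
    },
    set(next) {
      value = next;
      for (const fn of [...subscribers]) fn(); // notify only actual listeners
    },
  };
}

function effect(fn) {
  const run = () => {
    const prev = currentObserver;
    currentObserver = run; // track reads during this execution
    try { fn(); } finally { currentObserver = prev; }
  };
  run();
}

// Only the effect that read `count` re-runs when it changes.
const count = signal(0);
const log = [];
effect(() => log.push(count.get()));
count.set(1);
count.set(2);
console.log(log); // [0, 1, 2]
```

The trade-off described above lives in `subscribers`: creating and tearing down those subscriptions has a cost, but past a certain scale it is cheaper than re-running an entire top-down update.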

The same applies even to older fine-grained libraries like KnockoutJS, largely unchanged in the past decade. On one hand, you have to scroll all the way to the right to find it, as the classic library is among the worst performers. On the other hand, KnockoutJSX, which uses the techniques described here, generally outperforms the fastest Virtual DOM libraries in this benchmark.

The Effect of Progress

So Chrome 72 was released, and with it came a considerable improvement in node insertion performance across the board. Every library has enjoyed at least a 10% performance improvement on insert tests. It was enough to push libraries using the above approach a bit further ahead of the rest.

DOM operations are by far the most expensive part of any UI update, and browsers continue to get more optimized. On one hand, fine-grained libraries excel at the small changes that are “fast enough” in most libraries anyway, at least as far as benchmarks are concerned; even if a browser improves those areas, the library overhead, where present, is already considerable. However, the areas where browsers can make the biggest improvements (where DOM operations make up 90+% of the overhead) continue to scale down.

So What?

Nothing, really, right now. Everyone knows benchmarks, even the most realistic ones, are still artificial and shouldn’t be the basis for your decisions. Hopefully the trend in browser performance optimizations and the approach taken here are of some interest. Given its performance and the recent fascination with fine-grained coding patterns (thanks to React Hooks), I’m sure my work won’t be the last of its kind.

Don’t forget to star Solid on GitHub.