Introducing Incremental DOM

Over the last few years virtual DOM implementations like React, virtual-dom, Glimmer and others have gained major adoption and changed how developers think about their interaction with the DOM.

In our own work with virtual DOM implementations we have found two major issues that we set out to fix:

  • We really like templates. Our designers like templates. Let's make sure we can continue to use our existing templating languages.
  • Performance, especially on memory constrained mobile devices, can suffer when large virtual DOM trees need to be updated a lot.

Incremental DOM is our work-in-progress solution to those challenges. Its primary properties are:

  • Designed as a compilation target for “text that happens to be HTML” templating languages such as mustache.js.
  • Minimize new memory allocated each time a template is re-rendered (aka in virtual DOM speak: a new virtual DOM tree is created and diffed against the current tree).

Its API may seem a bit awkward at first — think of it as ASM.dom. It is not meant to be directly used by programmers, but instead it is designed to be emitted by a template compiler or other higher level API.

Reducing memory usage

Traditionally, virtual DOM implementations work in a two-phase approach whenever they need to re-render the virtual DOM:

  1. Render an entirely new virtual DOM tree.
  2. Diff the tree against the last known virtual DOM tree and apply those changes to the physical DOM.

Inherent to this approach is that a new virtual DOM tree is allocated for each render operation, one that is at least big enough to hold the nodes that changed and often a bit bigger.

Incremental DOM changes the model to be a single phase:

  1. While creating the new (virtual) DOM tree, walk along the existing tree and figure out changes as you go. Allocate no memory if there is no change; if there is, mutate the existing tree (only allocating memory if absolutely necessary) and apply the diff to the physical DOM.

We put the (virtual) in parentheses because we have found that, when mixing precomputed meta info into existing DOM nodes, it is actually fast enough to use the physical DOM tree instead of relying on a virtual “shadow” tree. But that distinction is not inherent to our approach.
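The single-phase walk can be sketched in a few lines of plain JavaScript. This is a toy model of the idea, not the actual library: nodes here are bare objects, and the hypothetical patchNode function stands in for the real reconciliation.

```javascript
// Toy sketch of the single-phase idea (not the real library):
// walk the existing tree alongside the desired output and mutate
// in place only where something actually changed.
function patchNode(existing, desired, stats) {
  if (existing.text !== desired.text) {
    existing.text = desired.text;          // mutate in place, no new node
    stats.mutations++;
  }
  const len = Math.max(existing.children.length, desired.children.length);
  for (let i = 0; i < len; i++) {
    const have = existing.children[i];
    const want = desired.children[i];
    if (have && want) {
      patchNode(have, want, stats);        // reuse the existing node
    } else if (want) {
      existing.children.push({ ...want }); // allocate only for new nodes
      stats.allocations++;
    } else {
      existing.children.length = i;        // drop removed trailing nodes
      break;
    }
  }
  return stats;
}

// The tree is reused across renders; a no-op render allocates nothing.
const tree = { text: 'hi', children: [{ text: 'a', children: [] }] };
const same = { text: 'hi', children: [{ text: 'a', children: [] }] };
const stats = patchNode(tree, same, { mutations: 0, allocations: 0 });
console.log(stats); // { mutations: 0, allocations: 0 }
```

The key property is visible in the last line: re-rendering identical output touches no memory at all, which is exactly what a two-phase implementation cannot avoid.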

Being a great compilation target

Virtual DOM APIs tend to be descendants of the document.createElement-based DOM APIs. One of their inherent assumptions is that for HTML elements with children, you can

  • immediately figure out what those children are
  • and actually know where that element is closed.

Neither of those assumptions holds in a generic templating language: e.g. it may allow you to open a <section> tag in one template, then render a few children, but never close that <section> tag.

Example template with an unclosed HTML tag.
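A hypothetical mustache-style template illustrating this might look like the following; the <section> tag is deliberately left unclosed:

```html
{{! hypothetical sketch: this template opens a tag it never closes; }}
{{! the caller is expected to emit </section> itself. }}
<section class="chunk">
  <h1>{{title}}</h1>
```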

It (hopefully) documents that the caller of the template has to close the tag themselves. This may not be the greatest idea, but there are reasons to do it (for example, chunked rendering on the server) and, since real template libraries support this, we want to support it too.

Incremental DOM embraces the ugliness of Real World HTML™ by breaking its API into pairs. One API to open a tag (elementOpen) and one to close it (elementClose):

Example codegen.

For a more complete example, see this gist.
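To show what the paired API buys you, here is a toy stand-in that mimics the shape of elementOpen/elementClose with a simple stack — this is an illustration of the concept, not the real library's internals:

```javascript
// Toy model: elementOpen pushes onto a stack, elementClose pops,
// and the pair only has to balance out eventually — even across
// separate functions, like a template that leaves a tag open for
// its caller to close.
const stack = [{ tag: '#root', children: [] }];

function elementOpen(tag) {
  const el = { tag, children: [] };
  stack[stack.length - 1].children.push(el);
  stack.push(el);
  return el;
}

function elementClose(tag) {
  const el = stack.pop();
  if (el.tag !== tag) throw new Error('unbalanced: ' + tag);
  return el;
}

function text(value) {
  stack[stack.length - 1].children.push(value);
}

// One "template" opens the section…
function header() {
  elementOpen('section');
  elementOpen('h1');
  text('Hello');
  elementClose('h1');
}

// …and the caller closes it later.
header();
text('body copy');
elementClose('section');
```

Because the open and close are separate calls, the unbalanced-HTML case above falls out naturally: the tree is well-formed as long as every open is eventually matched by a close somewhere.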

That way, as long as things balance out eventually, they continue to work. For similar reasons the API has dedicated support for attribute creation, for those cases where your template wraps entire attributes in if-statements (or loops, for whatever reason that may be a good idea :)
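A toy sketch of that attribute API — mirroring the shape of incremental-dom's elementOpenStart/attr/elementOpenEnd calls, but not their actual implementation (the renderLink helper and its values are made up for illustration):

```javascript
// Toy model: attributes are emitted one call at a time between
// elementOpenStart and elementOpenEnd, so a template can wrap
// individual attributes in if-statements.
let pending = null;

function elementOpenStart(tag) {
  pending = { tag, attrs: {} };
}

function attr(name, value) {
  pending.attrs[name] = value;
}

function elementOpenEnd() {
  const el = pending;
  pending = null;
  return el;
}

// e.g. a template like: <a href="/home" {{#if active}}class="active"{{/if}}>
function renderLink(active) {
  elementOpenStart('a');
  attr('href', '/home');
  if (active) attr('class', 'active'); // attribute behind a condition
  return elementOpenEnd();
}
```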

FAQ

Is this open source?
Yep.

Is this faster than virtual DOM implementation X?
Probably not in the general case; there is always room for optimization. Given the drastically reduced memory usage, it is easy to construct benchmarks where Incremental DOM wins; on the other hand, if your benchmark is CPU bound, Incremental DOM may be slower since it is not yet fully optimized. Whether that applies to your application depends on how CPU or GC bound it is.
Anyway, here is a link to the vdom benchmark. Incremental DOM does quite well.

Do I still need to implement shouldComponentUpdate or a similar mechanism?
Most likely yes. Doing no work at all is still better than doing work with less memory usage. That said, you can get away with a slightly less perfect “shouldComponentUpdate” if the actual changes to the DOM are minimal. Imagine you re-render an animation at 60fps and only change, say, a few style translations per frame. With Incremental DOM the heap memory allocated per frame would be minimal, and thus the long GC pauses that bust your frame rate would be rarer.

Do you use this in production?
No, but we have every intention of doing so.

Where do you actually use this?
We are building a new JavaScript backend for the Closure Templates templating language. Follow along on GitHub.

How does this relate to virtual DOM implementation X?
Incremental DOM is a very low level library. It could be used as a building block for other, higher level, virtual DOM APIs. We would love to see a JSX implementation based on it.

How big is it?
Currently 2.6 KB after minification and gzip.

How can I help?
If you work on a templating language, we’d love to see Incremental DOM adopted as an alternative backend for it. This isn’t easy; we are still working on ours and will be for a while, but we’re super happy to help with it. Docs are here. The steps to add Incremental DOM support to a templating language are basically:

  1. Add a small, hacky, partial HTML parser to it.
  2. Change the codegen to emit Incremental DOM API calls based on that.

See this gist for a complete example.
Contributions to the core library are welcome as well, of course!

Written by Malte Ubl.