Building cross-framework components with performance in mind: a web component thesis

I recently gave a presentation at Triangle JS about web components and web component interoperability amongst common UI frameworks. I’d like to build on that talk and present some new ideas around web component rendering options and their performance. I’ll also follow up with some sample implementations that address more complex rendering challenges, such as transcluded content and sharing common web components across different frameworks.

Benchmarking a variety of JS rendering implementations

A recent post at auth0 surveyed the landscape of commonly used JS frameworks and provided some insight into their individual performance. It was subsequently updated to include metrics for the DOM rendering libraries incremental-dom.js and virtual-dom.js. I took some time to reproduce these tests and verify the results claimed by the author. I also added metrics for the case where a custom element is used in the rendering template alongside the virtual-dom library. It should be noted that the goal of this suite is to “stress” test the browser’s rendering with each library and provide an average benchmark after running five tests with each implementation. The author also did due diligence by adding the performance improvements for rendering long lists suggested by each library (e.g. Angular’s track-by expression).
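For context, here’s roughly the shape of such a stress test (a simplified sketch, not the actual suite’s code; `renderFn` stands in for each library’s render entry point):

```js
// Render the same large data set with each library several times and
// average the timings, mirroring the "five runs per implementation" idea.
function benchmark(renderFn, runs = 5, rows = 1000) {
  const timings = [];
  for (let run = 0; run < runs; run += 1) {
    const data = Array.from({ length: rows }, (_, i) => ({ id: i, label: `row ${i}` }));
    const start = performance.now();
    renderFn(data); // each library's render entry point goes here
    timings.push(performance.now() - start);
  }
  return timings.reduce((sum, t) => sum + t, 0) / timings.length; // avg ms
}
```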

Memory consumed

The following chart shows the total memory consumed by each framework. It agrees with the author’s claim that incremental DOM consumes far less memory than the other frameworks (making it a great choice for mobile implementations where memory is constrained).

We can also see the number of major/minor garbage collections in Chrome caused by each framework. Garbage collections are triggered when frameworks create and destroy large numbers of JavaScript references; they affect other metrics and cause the browser to waste cycles “cleaning up”. As expected, we see incremental DOM shine again thanks to its memory-optimized algorithm.
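To illustrate why: incremental DOM describes the desired output imperatively and mutates the existing tree in place, so no intermediate virtual tree is allocated on each pass. A minimal sketch using incremental-dom’s public API (the `#list` host element is assumed):

```js
import { patch, elementOpen, elementClose, text } from 'incremental-dom';

const rows = [{ id: 1, label: 'one' }, { id: 2, label: 'two' }];

// The render function walks the existing DOM and patches it in place,
// producing far less garbage than rebuilding a tree of JS objects per pass.
function renderRows(data) {
  elementOpen('ul');
  data.forEach(row => {
    elementOpen('li', String(row.id)); // keyed so nodes are reused across renders
    text(row.label);
    elementClose('li');
  });
  elementClose('ul');
}

// Re-running patch() updates the same nodes rather than recreating them.
patch(document.getElementById('list'), renderRows, rows);
```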

Frames Per Second

Maintaining a high FPS is an important measure of a framework’s performance under heavy rendering load. This metric seems to indicate that most frameworks hover around 50 FPS when stressed to their maximum. If you’re not careful, rendering with Angular1 can be downright abysmal due to its long digest loop.
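You can approximate FPS in any browser by counting requestAnimationFrame callbacks; anything that blocks the main thread (long digests, big renders) shows up as a lower count. A minimal vanilla sketch:

```js
// Count requestAnimationFrame callbacks per second and log the result.
let frames = 0;
let last = performance.now();

function tick(now) {
  frames += 1;
  if (now - last >= 1000) {
    console.log(`${frames} fps`);
    frames = 0;
    last = now;
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);
```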

Dropped Frames

This metric tracks the number of frames “skipped” by the browser due to overall performance. Again we see that Angular1 struggles when not used properly. There is also a noticeable impact when rendering custom elements instead of native elements; one can assume this is due to initialization time the browser spends constructing each custom element from its definition. Some more commentary on this can be found here. I would expect this to improve as browsers add further support and iterate on the new custom element spec.

Nodes Per Layout

Another important metric is the number of nodes rendered per layout. This reflects rendering-algorithm optimizations that attempt to minimize the number of DOM nodes changed during each rendering cycle. Surprisingly, incremental DOM and virtual DOM score virtually the same, followed by the other frameworks.

Total Javascript Runtime

This metric tracks total time spent on JavaScript execution. Again incremental DOM shines here, followed closely by cito.js and virtual DOM.

Performance Conclusions and a note about Layout/Paint

I specifically left out Layout/Paint metrics due to an issue described here. I was unable to reproduce the author’s results for this metric, and it appeared that higher layout/paint times actually bode well for each sample (rather than the opposite, as the author described). The idea is that more time spent on layout/paint indicates higher throughput (which seemed to correlate with FPS), though I’ll wait for confirmation before making any assumptions.

My overall conclusions from this benchmark are:

  1. Each of the frameworks mentioned can perform relatively well when used carefully and when best practices are observed.
  2. This stress test goes beyond what you’d commonly observe in your applications, but it’s good to see how frameworks perform under stress when deciding which rendering library to use in future implementations.
  3. I would strongly encourage you to upgrade to newer JS frameworks like React or Angular2 when feasible. These frameworks will help you avoid the performance “pitfalls” inherent in older implementations like Angular1 or Ember.
  4. Browser vendors need to continue improving web component performance. Making developer-contributed DOM as fast as “native” DOM should be a priority. It appears there is already an ongoing effort to improve this based on the custom elements v1 spec’s element upgrades. There is also a performance hit for Shadow DOM “shadowRoot” construction. In the meantime, we should be careful to conditionally render custom elements only when needed (see the sketch after this list).
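One way to do that last part is to defer loading and defining an element until an instance actually appears in the page. A sketch (the `kebab-menu` element and module path are hypothetical, and dynamic `import()` assumes a modern browser or bundler):

```js
// Only pay the custom element upgrade cost when the element is actually used.
if (document.querySelector('kebab-menu') && !customElements.get('kebab-menu')) {
  import('./kebab-menu.js').then(({ KebabMenu }) => {
    customElements.define('kebab-menu', KebabMenu);
  });
}
```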

Some Suggested Libraries and Strategies for Web Component Rendering

Based on these results, I wanted to tackle a few common rendering problems, such as transclusion (the inclusion of nested content in a component) and template binding, using the common UI frameworks AngularJS and ReactJS alongside web components. In these tests I chose to exclude Shadow DOM due to its current lack of browser support and slow polyfill, though I’ll include links that may be helpful for those still pushing ahead with Shadow DOM now.

Surveying the landscape of libraries that make use of these rendering techniques, I found a few contenders that should provide an optimal rendering path for web components alongside your web application:

  1. SkateJS: Skate is a web component rendering library that utilizes incremental DOM to render web components. It provides helpful extensions that make interoperability less challenging and weighs in at only 4k min+gz. Its core contributor, Trey Shugart, works at Atlassian and sits on the contributors list amongst Google, Microsoft and others.
  2. PreactJS: Preact provides a minimal subset of the virtual DOM implementation used in ReactJS. It also has helpful extensions for rendering HTML strings and defining custom elements, and weighs in at an astoundingly light 3k min+gz. It has been benchmarked alongside Matt Esch’s “heavier” virtual-dom.js implementation and performed comparably. Since these libraries implement virtually the same solution, I opted for the lighter alternative.

These libraries should provide a great platform for building optimal components in your framework implementations. Before I dive into a few examples and weigh pros/cons, I’d also like to note that in most cases you will likely be able to get away with vanilla JavaScript when building web components (as noted in my Triangle JS presentation, and as sketched below). This is also clearly observed by peeking into the source of the project, a highly respected web component framework that is cross-framework compatible and works well on mobile/hybrid apps. These rendering libraries will likely be desirable only when rendering a lot of DOM inside a component.
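For instance, a small component often needs nothing more than the custom elements API itself (a minimal sketch; `user-badge` is a hypothetical element):

```js
// A vanilla custom element (v1 spec) with no rendering library at all.
class UserBadge extends HTMLElement {
  static get observedAttributes() { return ['name']; }

  connectedCallback() { this.render(); }
  attributeChangedCallback() { this.render(); }

  render() {
    // For a handful of nodes, direct DOM updates are plenty fast.
    this.textContent = `Hello, ${this.getAttribute('name') || 'anonymous'}!`;
  }
}
customElements.define('user-badge', UserBadge);
```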

Transclusion & Template Binding

Transclusion allows us to take DOM rendered inside of a component by a framework and drop it into the component’s internal DOM structure wherever we’d like. Given that custom elements behave exactly like “real DOM”, this is fairly trivial in most cases. I wrote up an example of this here with custom elements and AngularJS, reimplementing the Patternfly List View. I also did the same thing with React here. What should become immediately clear is that commonly used Angular patterns such as “ng-model” bindings and “ng-repeat” loops still work just as well with web components (see the sketch below). You can also still easily bind events and provide a means to send messages amongst components.
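A hedged sketch of the idea (the `list-view`/`list-item` elements and component names here are hypothetical, not the Patternfly example itself): because custom elements are ordinary DOM, Angular’s directives bind to them directly.

```js
// AngularJS directives like ng-repeat, ng-model, and ng-click attach to
// custom elements exactly as they do to native elements.
angular.module('app', []).component('itemList', {
  template: `
    <input type="text" ng-model="$ctrl.filter" placeholder="Filter items">
    <list-view>
      <list-item ng-repeat="item in $ctrl.items | filter:$ctrl.filter"
                 ng-click="$ctrl.select(item)">
        {{item.title}}
      </list-item>
    </list-view>`,
  controller: function () {
    this.items = [{ title: 'First item' }, { title: 'Second item' }];
    this.select = item => console.log('selected', item.title);
  }
});
```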

Building on this example, I took the dropdown widget (kebab widget) and implemented its core widget structure using Preact. You can see this here. What’s notable is that I’ve taken the dropdown items rendered in the DOM and transcluded them into the Preact component’s “render” method (as its children). Another approach is to use custom element attributes, which pass in the list item data instead; you can see this in action here. One challenge with this approach is that the Preact component lifecycle will likely not match your framework’s, so you’ll have to ensure proper handling of attributes at all times (note the try/catch around the attribute JSON, sketched below, which handles the case where the data has not yet been rendered by Angular). With careful handling, though, this is completely workable.
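Here’s a minimal sketch of that defensive attribute handling (the component, element, and attribute names are hypothetical, not the actual kebab widget source):

```js
import { h, render, Component } from 'preact';

// The JSON attribute may not have been interpolated by Angular when Preact
// first runs, so the parse is wrapped in a try/catch with a safe fallback.
class KebabMenu extends Component {
  parseItems() {
    try {
      return JSON.parse(this.props.items) || [];
    } catch (e) {
      return []; // attribute not yet rendered by the host framework
    }
  }

  render() {
    return h('ul', { class: 'dropdown-menu' },
      this.parseItems().map(item => h('li', { key: item.id }, item.label)));
  }
}

// Mount against a host element whose "items" attribute Angular manages.
const host = document.querySelector('kebab-menu');
render(h(KebabMenu, { items: host.getAttribute('items') }), host);
```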

If you’d like to look at a simpler example that shows a few Preact use cases with Shadow DOM, take a look here and here. These were written by Preact’s author, Jason Miller.

A similar example I wrote utilizing SkateJS to render an array of attribute data can be found here (a rough sketch follows). One thing to note about Skate: it’s not exactly spelled out in the docs that you must “re-render” any transcluded content (light DOM) in the “render” method of your component, as Skate will replace it after its rendering cycle. You also need to give it some options to remove the Shadow DOM shadow root that it implements by default. You can see some commentary on how to do this here.
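A rough sketch of the attribute-data approach, assuming the Skate 1.x API (`define`/`prop`/`vdom`); the `x-list` element is hypothetical, and the exact option for removing the shadow root is covered in the commentary linked above:

```js
import { define, prop, vdom } from 'skatejs';

// Note: any light DOM you want to keep must be re-emitted inside render(),
// since Skate replaces the element's content on each rendering cycle.
define('x-list', {
  props: {
    // Attribute-linked prop: <x-list items='[{"label":"one"}]'> stays in sync.
    items: prop.array({ attribute: true })
  },
  render(elem) {
    vdom.element('ul', () => {
      elem.items.forEach(item => {
        vdom.element('li', () => vdom.text(item.label));
      });
    });
  }
});
```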

What about Polymer?

If you’re up to date on the web component scene, you’ll likely find it amusing that the Polymer library was not mentioned in this post until now. While its goal is to be a lightweight, extendable framework sufficient for building a variety of components, I feel it has missed the mark here and made too many assumptions for most framework authors to embrace. As spelled out in a recent post by Rob Dodson, gradual adoption of web components is ideal for most teams and should be tempered with ongoing browser implementations of the web component specifications. While most vendors have agreed on and started implementations of the Custom Elements, Templates, and Shadow DOM specs (notably Apple’s Safari), the HTML Imports spec, heavily relied upon by Polymer, has not been agreed to by other vendors (notably Apple and Mozilla) and should be a cause for concern (who wants to use a polyfill forever?). It’s also evident that ES6 module/code-splitting techniques combined with HTTP/2 can lead to very performant web component solutions, as noted by Google’s own Addy Osmani. I’d also point out that Polymer’s rendering performance has been questioned at times and has led to quite a few debates on Twitter.

I’ll close on a positive note about Polymer though, and share my support: it provides a revolutionary approach in its implementation of atomic design principles and does so in a very forward-looking way. I’ll be watching this topic closely and will be happy to jump onboard once other browser vendors have too.


In conclusion, web components have come a long way, and some implementation and interoperability strategies are definitely starting to take shape. I’d love to see many of the web’s current jQuery plugins become optimized web components with their own lightweight, performant implementations that are easily shareable and extendable across frameworks. Providing these “leaf” nodes as optimized components with a reduced set of dependencies should make them highly usable in many applications (for example, think how performant a tree view would become with virtual DOM). Web components also do a lot to help us encapsulate styles and behaviors (but I’ll save animation for a future post).

‘Til next time!