Pavlik Kiselev
Jan 26 · 6 min read

About one and a half years ago I happened to join ING (as part of Frontmen) to help them with their Polymer and Web Components adventures. Throughout this time I have learned a lot about the internals of Polymer and would like to share some insights with you. This article assumes you already have some knowledge of Polymer and have experienced performance issues in the past. It is not intended to disparage the work of the Polymer/Web Components team — rather to point out the most powerful but computationally intensive parts of the framework. While the idea to #UseThePlatform can empower developers to build applications quickly, it has tradeoffs in old browsers that you should be aware of.

What is Polymer

The main idea of Polymer is to embrace the modern capabilities of the Web Platform. Modern capabilities mean modern browsers. For the rest (yes, I'm talking mostly about you, Internet Explorer 11) a set of polyfills is used to simulate the modern platform. We can imagine the framework as a three-layer cake (or lasagne, if you prefer): a layer of polyfills, a layer of Polymer, and a layer of elements built with Polymer to assemble the required application.

The layers of Polymer, presented in a (poorly formatted) table

What is a Polymer Element

First of all, let's take a look at what parts a Polymer element consists of. The simplest way to do so is to take a Polymer element and split it into logical blocks. Don't pay much attention to it right now — we are going to dive into the details of each block later, where I will repeat the code.

Now, let’s take a closer look at every part separately.


Shady DOM and CSS

Shadow DOM hides part of the DOM tree inside its host. It hides everything — elements, styles, and events. To achieve this set of features, the polyfill has to do some sophisticated magic.
To hide the elements it performs:
- Updating all IDs to make them unique.
The "id" attribute can be used for anchoring elements on the page (<a href="#id-of-the-element"></a>) or be referenced by labels and ARIA attributes of related form elements. IDs must be unique across the document, but with the help of Shadow DOM we can have multiple "documents" on one page.
- Patching querySelector, querySelectorAll and the like to exclude hidden parts from the result.
Again, all methods returning a NodeList must now exclude nodes of other Shadow DOMs.

Methods returning a Node/NodeList and methods traversing the DOM, together with some others, get polyfilled
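To illustrate the filtering idea, here is a minimal sketch with DOM nodes modeled as plain objects. The real polyfill patches the prototype methods and walks actual nodes; `filterToRoot` and the `ownerRoot` field are illustrative assumptions, not the polyfill's real API:

```javascript
// Sketch: how a Shady DOM-style polyfill might filter query results
// so that nodes belonging to another (shadow) root stay hidden.
// Nodes are plain objects here; the real polyfill patches
// document.querySelectorAll / Element.prototype.querySelectorAll.
function filterToRoot(nodes, root) {
  return nodes.filter(node => node.ownerRoot === root);
}

const docRoot = 'document';
const shadowRoot = 'my-element-shadow';
const allMatches = [
  { id: 'a', ownerRoot: docRoot },
  { id: 'b', ownerRoot: shadowRoot }, // lives inside a shadow root
  { id: 'c', ownerRoot: docRoot },
];

// A patched document.querySelectorAll would only return 'a' and 'c',
// because 'b' belongs to a different root.
console.log(filterToRoot(allMatches, docRoot).map(n => n.id)); // ['a', 'c']
```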

To hide the styles it does:
- Parsing of all styles to be able to use CSS custom properties and CSS mixins (deprecated now). The Shady CSS polyfill builds an abstract syntax tree with CSS custom properties and mixins connected to the CSS rule definitions, so it can update them quickly when a property changes.
- Modifying of all styles — it adds the name of the host element to every rule definition to ensure there are no collisions between different elements
- Adding the styles to the <head> of the application
- Adding to each tag within the element a CSS class with the element name

In other words, all style-related modifications are similar to any other approach to avoiding collisions — BEM, CSS Modules — but performed at runtime.
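The scoping step can be sketched roughly like this. `scopeRule` is a hypothetical helper operating on raw strings; the real ShadyCSS shim works on its parsed AST and handles selectors like `:host` and `::slotted` that this sketch ignores:

```javascript
// Sketch (assumed simplification of ShadyCSS scoping): append the
// host element's scoping class to every selector so the rule cannot
// leak out of the element and collide with other components.
function scopeRule(cssRule, scopeClass) {
  const [selector, body] = cssRule.split(/{(.*)}/s).filter(Boolean);
  const scoped = selector
    .split(',')
    .map(s => `${s.trim()}.${scopeClass}`)
    .join(', ');
  return `${scoped} {${body}}`;
}

console.log(scopeRule('button, a { color: red; }', 'my-element'));
// → "button.my-element, a.my-element { color: red; }"
```

This mirrors the point above: every tag inside the element also receives the `my-element` class, so the rewritten rules match only that element's own markup.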

To hide the events, the polyfill patches addEventListener/removeEventListener on elements and the methods of Event/CustomEvent/MouseEvent.
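One of the things those patches do is event retargeting: when an event bubbles out of a shadow tree, listeners outside must see the host element as the target, not the inner node. A minimal sketch of the idea, with the tree modeled as plain objects (`ownerRoot` and `host` are illustrative fields, not the polyfill's real internals):

```javascript
// Sketch (assumed simplification): retarget an event's target to the
// host element when the listener lives outside the shadow tree, so the
// inner structure stays hidden from outside code.
const documentScope = {};
const host = { tagName: 'MY-ELEMENT', ownerRoot: documentScope };
const shadow = { host };                       // the host's shadow root
const button = { tagName: 'BUTTON', ownerRoot: shadow }; // inside it

function retarget(target, listenerScope) {
  let node = target;
  // Climb out through shadow boundaries until we reach a node that
  // lives in the same root as the listener.
  while (node && node.ownerRoot !== listenerScope) {
    node = node.ownerRoot.host;
  }
  return node;
}

// A click on the inner button looks like a click on <my-element>
// to a listener attached at document level.
console.log(retarget(button, documentScope).tagName); // 'MY-ELEMENT'
```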

Because of the amount of work that needs to be done at runtime, this is the root cause of potential issues with the Initial Render Time or Time to Interactive of an application.

Custom Elements

Custom Elements let developers create their own tags. It can be something simple like a button with an icon or something complex like a full-featured chat.
The main idea of the implementation is to use a MutationObserver which listens to every modification of the document and then constructs the Custom Element with all required lifecycle methods. The job is done in batches with the help of Promises. However, in ancient browsers like Internet Explorer, where Promises are polyfilled too, this part slightly hurts the performance. An excellent article by Jake Archibald explains in detail the differences between Good Old Browsers and Modern ones regarding JavaScript execution. Long story short — Promises in Internet Explorer resolve in the next task, not the next microtask. In theory, it's not noticeable unless you have several tasks queued one after another.
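The upgrade mechanism can be sketched as a registry lookup per added node. This is a heavily simplified model (plain objects instead of DOM nodes, no batching); the real polyfill drives `upgrade` from a MutationObserver callback on the whole document and schedules work through Promises:

```javascript
// Sketch of the Custom Elements upgrade step (assumed simplification).
const registry = new Map();

function define(name, callbacks) {
  registry.set(name.toUpperCase(), callbacks);
}

// What the MutationObserver callback would do for every added node:
// look the tag up in the registry and run its lifecycle callback once.
function upgrade(node) {
  const definition = registry.get(node.tagName);
  if (definition && !node.__upgraded) {
    node.__upgraded = true;
    definition.connectedCallback.call(node);
  }
}

define('my-chat', {
  connectedCallback() { this.connected = true; },
});

const node = { tagName: 'MY-CHAT' };
upgrade(node);
console.log(node.connected); // true
```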


Template Element

The template element is widely supported nowadays. Approximately 90% of all browsers have it; the exception is the Internet Explorer family. The polyfill of HTMLTemplateElement is based on the fact that every valid HTML tag is treated as an HTMLUnknownElement by default, which extends HTMLElement and has all the basic properties and methods. The only things left are to implement the "content" property of the template with a DocumentFragment (supported in IE) and inner/outerHTML, and to patch the methods working with the DOM to handle dynamic creation of templates. Of course, there are a few quirks for different browsers, but it does not carry a performance penalty.

The slowest part is, probably, the loop through the root nodes of a newly decorated template.
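"Decorating" here means moving the template's children into a detached fragment exposed as `.content`, so they become inert. A minimal sketch with child nodes modeled as an array (`decorateTemplate` is an illustrative name, not the polyfill's actual function):

```javascript
// Sketch (assumed simplification of the HTMLTemplateElement polyfill):
// move every root node of the template into a detached "content"
// fragment so the children are inert and not rendered.
function decorateTemplate(template) {
  template.content = { childNodes: [] };
  // The slow part: a loop over every root node of the template.
  while (template.childNodes.length) {
    template.content.childNodes.push(template.childNodes.shift());
  }
  return template;
}

const tpl = decorateTemplate({ childNodes: ['<div>', '<span>'] });
console.log(tpl.childNodes.length);         // 0 — children moved out
console.log(tpl.content.childNodes.length); // 2 — now inert in .content
```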

Polyfill connectors

This is a very simple part — no prototype modifications, no severe element modifications. Polymer just calls the right things at the right time and adds mobile gesture support for old mobile browsers, which is not needed nowadays. When Polymer was released, some mobile devices did not support "click", so frameworks had to be creative to detect it from the tap. Modern browsers do not have problems with that.

Property bindings

Usually, applications have a state — some data from an API. To be able to use it within our templates, we need to bind it. In Polymer this is done in a unique manner: it iterates through all elements and attributes of all templates and collects information on whether each binding is a property, a method or just a string:
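A simplified sketch of that annotation scan, using a hypothetical `parseBindings` helper (the real implementation walks template nodes and attributes and handles many more cases than this regex does):

```javascript
// Sketch (assumed simplification): scan text for [[oneWay]] and
// {{twoWay}} annotations and record what kind of binding each is.
const BINDING = /(\[\[|\{\{)\s*([^\]}]+?)\s*(\]\]|\}\})/g;

function parseBindings(text) {
  const parts = [];
  for (const [, open, target] of text.matchAll(BINDING)) {
    parts.push({
      target,                                // property or method path
      mode: open === '{{' ? 'two-way' : 'one-way',
      isMethod: target.includes('('),        // computed binding?
    });
  }
  return parts;
}

console.log(parseBindings('Hello [[name]], you have {{count(items)}} items'));
// → [{ target: 'name', mode: 'one-way', isMethod: false },
//    { target: 'count(items)', mode: 'two-way', isMethod: true }]
```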

This has some performance implications, because the Polymer semantics of the elements are built at runtime from plain string-based HTML. You can think of it as building a light version of a virtual DOM at runtime.

Iron and Paper Elements

The Shady DOM and CSS section tells us that maintaining the Shadow DOM polyfill is very expensive. So, what is the solution? Correct — create a set of reusable components which have to be parsed only once, no matter how many hundreds of buttons are in your application. If all of them are <paper-button>, then the page is fast. However, if you have <my-button-red>, <my-button-blue> and <my-button-green>, all three buttons will be parsed separately, even though there could be no difference inside.

What else can be tricky about having Shady DOM and CSS? Since the CSS is parsed on the client side, it should be as thin as possible. For example, we could strip all vendor prefixes. But how can it then be used in old browsers? By providing a component with the vendor prefixes! The downside of this approach is that all properties have to be covered. Take iron-flex-layout, for example:

iron-flex inside

So, if you need just a small icon, you get 400 lines of all the flex definitions. The same goes for paper-input, which requires paper-styles with color.js inside, containing 300 lines of Material Design color definitions.
If you have a big application with flex layouts here and there and you use the default Material Design colors, it pays off; otherwise, please be careful with all the additional code that comes into your application without you needing it.

Build System

Let's say we have an application with a main bundle and two lazily-loaded pages — a Category overview with a list of Products (route "/phones") and a Product overview (route "/nokia3210"). The "/phones" and "/nokia3210" pages share the same "product" element. Because it cannot be anticipated which page will be opened first, the shared component ends up in the main bundle. And because of the runtime computations described in the previous sections, it slows down the initial render and time to interactive of the whole app.
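The hoisting decision can be sketched as a tiny piece of hypothetical bundler logic: any component imported by more than one lazy entry point moves into the eagerly-loaded main bundle. `placeInBundles` and the entry names are illustrative, not any real bundler's API:

```javascript
// Sketch (hypothetical bundler logic): a dependency shared by more
// than one lazily-loaded entry point is hoisted into the main bundle,
// because we cannot know which entry will be requested first.
function placeInBundles(entries) {
  const usage = new Map();
  for (const [entry, deps] of Object.entries(entries)) {
    for (const dep of deps) {
      usage.set(dep, (usage.get(dep) || []).concat(entry));
    }
  }
  const main = [];
  for (const [dep, users] of usage) {
    if (users.length > 1) main.push(dep); // shared → main bundle
  }
  return main;
}

console.log(placeInBundles({
  '/phones':    ['product', 'category-list'],
  '/nokia3210': ['product', 'product-gallery'],
})); // ['product'] — the shared element lands in the main bundle
```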

So, what?

Okay, now, when we have some insights about the internals of Polymer what should we do?

First of all, be aware of the strong and weak points of each framework before you make a choice for your application. A framework can be great for fast prototyping but bad for long-term development, or its learning curve can be very steep. Its bundle size can be smaller, yet the initial render still longer.

Secondly, one of the options is to simply not care about old browsers. In Chrome, and soon in all browsers, Shadow DOM is implemented natively, meaning no need for polyfills.

If you do need to support old browsers, then you can probably try to use only some parts of Polymer — no Shadow DOM, for example. Or try a different set of components (Vaadin is a good example).

Another option is to consider the successors of Polymer — lit-html and LitElement. They save performance on the upper layers, but it still takes time to apply Shady DOM and CSS.

JS Planet

A space dedicated to a variety of articles about development

Thanks to Mikhail Kuznetcov and Alex Korzhikov
