Server-side rendering web components

Trey Shugart
Aug 4, 2017 · 16 min read

Server-side rendering HTML isn’t anything new. It’s something that we did because, at the time, it was the best way to deliver a page to the user. Over time, we realised that doing a round-trip to the server wasn’t the best use of the user’s bandwidth or patience, so we started finding better ways of delivering the front end and separating it from the back end. In doing so, we lost some important things that we had previously taken for granted. Now, in 2017, we’re looking back to the good ol’ days for guidance and are once again looking to the server for answers.

Something that’s come up again and again as an argument against web components is that you cannot render them on the server, thus losing the key benefits we’ve gone back to server-side rendering for. Out of the box, rendering web components on the server is not possible because there is no way to declaratively represent shadow roots and their content in HTML — or to attach them to a host — without executing imperative JavaScript on the client. However, with a little bit of elbow grease, we can make this happen, and more.

Before going into the specifics of how we’re going to do this, let’s look at why we’re grasping at the glory days of web development for reasons other than just nostalgia.

SEO and other bots

There are so many different bots out there on the internet whose sole job is to read content on the web and do something with it.

Googlebot

Of those bots, the most famous is probably Googlebot. Googlebot needs no introduction and its rendering behaviour is thoroughly documented. Even though the docs on it are top notch, I still wanted a testbed for the work I’m doing on SSR, so I set up https://treshugart.github.io. My experience is consistent with the docs, but with a couple of things to note:

  1. Doesn’t support v1 web components (currently based on Chrome 41, so needs polyfills to look correct)
  2. Will index what it reads from the raw response (as opposed to what’s painted on the screen, so doesn’t need polyfills for indexing if you SSR the composed tree)

This is what things should look like (don’t laugh, I’m not concerned with styling or content right now):

This is how things should look. I can hear you chuckling. Stop it!

This is what Googlebot rendered with the polyfills:

The rendering issues are because we need to invoke some imperative code in the Shady CSS polyfill to enable CSS scoping. Alternatively, we could use a CSS-in-JS lib.

The most important part here is what Google ends up indexing as opposed to what it sees. Luckily, regardless of how it looks, as long as the raw response contains legible HTML, it will index what it sees from that. Here’s what Google indexed given the above content when I search for “treshugart”:

Searching for “treshugart” proves successful. Phew!

Bing, Yahoo, Yandex etc.

Though Google is definitely the heavyweight champ of search engines, if you want to extend your reach to everyone, you’re going to want to make sure your site can be read by other crawlers.

A quick search indicates that most of these apparently still don’t execute JavaScript. Even if some of them did, to what extent? What about social media shares that do a quick scrape of a page to preview its content?

There are a lot of factors to consider other than just initial rendering, such as fetch requests and other types of async content. To rely on the fact that JavaScript is enabled seems brittle for something that can be instrumental to the success of a business.

User experience

The other major factor is how responsive the initial loading of a page is. By server-side rendering your content, you ensure the user sees something as soon as possible, even if some of it can’t be interacted with yet. Immediately the user can start planning their initial action, and while that is happening, you can load the important JS necessary to make what you’ve rendered interactive.

One of the upsides to this is that you can pre-fetch requests on the server and pre-render them to avoid unnecessary spinners on the initial boot up. This makes the whole experience feel more snappy. Obviously then your server render performance is subject to the loading of those fetch requests, so it may make sense to not load everything up front.

The obvious downside to deferring the loading of your JavaScript is that the rendered content looks like it can be interacted with, which may not be the case. Built-ins like links and selects will be fine; basic functionality will still work. But if you have any custom UI components, or built-ins that require JavaScript, users won’t be able to use them as you’ve intended until the JS catches up. Doing your part by using semantic HTML and wrapping input fields in forms helps make this a better experience for the user.

I’ve heard some say that SSR works well for websites — where users consume content — but not so much for apps that are highly interactive. I think that for the most part SSR can work well for both because at the very least you’re delivering content to your users sooner. The thing to consider here might be how much of an impact it will have in comparison to other optimisations you are able to make.

Statically SSR’ing a page (like static-site generation) is great for pages that don’t have dynamic content, or that you want to pre-render given a certain input. It can yield consistent, high-performance results with low overhead. Dynamically SSR’ing a page has its own performance caveats depending on how many RPS (requests per second) your server can handle, so make sure you do your homework if you want to go down this particular road.

There are trade-offs to both server-side rendered and client-rendered apps. As Tom Dale points out in the context of Ember FastBoot, it’s about “bending the curve” of these trade-offs to get the best from both worlds. There’s no one magic combination of client / server work that will work for every site or app. The moral here is that nothing is a silver bullet that will keep you from having to measure and care. SSR is just another tool you can use to help, if you need it.

Even so, you’re probably still wondering how to actually do it.

The plan

The problems are clear and libraries that present their own component models have been quick to solve these with SSR, proving it’s a valid solution. So how do we do this with web components? Where do we even start?

Rendering web components on the server is not possible because there is no way to declaratively represent shadow roots and their content in HTML — or to attach them to a host — without executing imperative JavaScript on the client.

Before we go any further, we should define some terms:

  1. Serialise: taking a DOM tree and converting it to an HTML string to be rendered on the client. It should contain all of the necessary information for the client to perform rehydration (see below for definition) of the shadow roots.
  2. Rehydrate: attaching a shadow root at the declaration point in the rendered HTML and reverse-engineering the composed tree back into light DOM.
  3. Web components: combination of both Custom Elements and Shadow DOM.

From the start, the plan has always been to stay as close to the metal as possible and to build the bare-minimum in order to achieve our goal, which is:

  1. Running DOM (and web components) in Node
  2. Serialisation of a DOM tree to an HTML string
  3. Rehydration of the serialised string on the client back into real DOM

For number 1, the hope was that one of the existing Node DOM implementations would eventually have support for these. However, since this was an investigation into the possibilities, I wanted to start small and build up so I could move fast and incrementally. The goal here wasn’t to have a full Node implementation, but just enough to give us a practical first iteration.

For 2, there were a couple of possible outcomes. It could either be done completely in user-land, or, if 3 could become a standard (more on that in a sec), it might make sense to also standardise how a DOM tree should be serialised for rehydration. That might make more sense because it’s tightly coupled to 3. So 1 and 2 can both be contained in a Node DOM implementation — 2 being based on standards and browser-implemented — and 3 can be done on the client. The ideal result here, then, is that there are no non-standard libs other than the stuff you need in Node, of course.

For 3, the hope was to come up with the most practical implementation for now, but to also provide the ideal path — either via an alternate algorithm or by documenting it — to present to standards bodies. The end result should support declarative (as in HTML) shadow roots, distribution and CSS encapsulation, thus also being able to run without JavaScript. The limitation of the user-land approach, obviously, is that it requires JavaScript. There has already been an issue that was closed due to a lack of proposals, so hopefully we can eventually rekindle that with something concrete.

The journey

Unfortunately, Node doesn’t ship with a DOM implementation, but there are a few libraries that we can choose from. The problem with all of the implementations (at the time of this writing) is that none of them support web components.

JSDOM

The likely de facto standard here is JSDOM, due to it being so robust, standards-compliant, feature-rich and possibly the longest-lived of them all.

There’s currently an issue and a PR for implementing web component support, which hasn’t been merged yet. Since JSDOM is standards-compliant (almost to a fault), the work to implement them as per the spec seems quite daunting.

Domino

Another implementation that’s garnered quite a bit of attention in the past is Domino. There’s a PR to implement Custom Element support; however, there’s no work being done yet on Shadow DOM, which is the main feature necessary for SSR.

Undom

Undom hails from the creator of Preact. It’s less comprehensive than Domino and JSDOM. Adding support for things like Custom Elements and Shadow DOM is dead simple because it’s built as a small subset of the DOM standards, so there’s less to worry about up front.

Others

There are other implementations, but for the sake of brevity we won’t mention them. The main takeaway here is that there are many implementations, with varying degrees of comprehensiveness, and none of them support the necessary features to server-side render web components.

Prior art

Electron — My initial foray into all of this was messing around with Electron to provide a place where I could run the DOM code (instead of using a Node DOM), which would then serialise the tree and report it back to the host script. Back then, the representation of the serialised tree was much the same as it is currently. However, the concerns here were performance and the obtuseness of the API, and this approach never got fleshed out because my gut feeling was that it was a very roundabout way to run some code and get a string back. It always felt like a more direct approach was possible.

Headless Chrome — A similar approach would be using Headless Chrome to do the same thing Electron was doing. The question in my mind is still: how good can performance actually be if you’re spinning up an actual browser, and if you have to do that, how good can the API look for consumers? I haven’t explored this option much either because it felt very similar to the Electron approach.

Server Components — A server-side DOM implementation based on the initial Custom Element APIs. It doesn’t have Shadow DOM support, but that could be implemented fairly easily. The biggest issue I was confronted with was that you can’t just load it and have it patch all the necessary globals so that you can simply import your web component code and serialise it. It felt like, in order to do what I wanted, I’d have to take the project in a different direction.

Scram.js — Also using Electron behind the scenes, Scram.js is geared towards declarative development in Express, with rendering done using headless Electron. My concerns about performance and API simplicity with Electron, mentioned above, made me feel like this possibly wasn’t the ideal approach. We also don’t need the extra features it provides just to do SSR.

What got me to finally push through and come up with something was a gist Justin Fagnani wrote. This method revolved around rendering light DOM next to the <shadow-root /> element as opposed to rendering the entire composed tree and reverse-engineering it. Essentially this is what you see when you open up dev tools on an element that has a shadow root attached to it. For example:

<x-element>
  Light DOM
  <shadow-root>
    <ol>
      <li>Shadow DOM</li>
      <li><slot></slot></li>
    </ol>
  </shadow-root>
</x-element>

This is what we started out with, but we realised that bots that don’t execute JavaScript don’t get an accurate representation of what should be crawled, and content may appear out of order. Without JavaScript, the indexed content (and what the user would see) is:

Light DOM
1. Shadow DOM
2.

Which doesn’t really make sense. What we really intend to be consumed is:

1. Shadow DOM
2. Light DOM

While Googlebot might execute JavaScript, I don’t think targeting one implementation — even if its market share is large — is a viable solution. It’s also not practical to say anything that scrapes a page should execute JavaScript or do manual distribution of light DOM in order to accurately read the content.

Given the caveats of all the different approaches, the best thing seemed to be to shave the Node yak and just start re-implementing the main parts of the DOM APIs we need to get web components running there.

Starting small

One of the most important things to me with this was being able to start small and explore the possibilities. I didn’t want to get ahead of myself and immediately start implementing web components according to the specs (reactions, queues, etc.) in any of these without first knowing what I’m up against and if it’s even worth it. Thus, I decided to take Undom and build a minimal implementation for it.

skatejs/ssr

The result of the initial work is a small lib called skatejs/ssr. It’s not complete yet, but it renders most things you throw at it. For an initial implementation, it seems more than sufficient. Building this has:

  1. Proven that you can, in fact, render web components on the server and rehydrate them on the client.
  2. Provided a basis for further exploration of possibilities (i.e. no JS required for proper styling, CSS encapsulation).
  3. Given us ammo for pitching declarative shadow roots to the standards bodies.

Basic usage

Using the skatejs/ssr package, the following is a basic example of how to render a simple component on the server:

// This provides the DOM API, so load it first.
require("@skatejs/ssr/register");

// Renders the provided DOM tree to a string.
const render = require("@skatejs/ssr");

class Hello extends HTMLElement {
  constructor() {
    super();
    // A shadow root might already exist if it's been
    // rendered on the server and you load this on the
    // client.
    if (!this.shadowRoot) {
      this.attachShadow({ mode: "open" }).innerHTML = `
        Hello, <slot></slot>!
      `;
    }
  }
}

customElements.define("x-hello", Hello);

const hello = new Hello();
hello.textContent = "World";

render(hello).then(console.log);

With the current iteration, if you run this script with node it will output something like (but without the comments):

<!-- This exists to define the rehydration function
     so we don't have to duplicate it for every root. -->
<script>function rehydrate () { /* ... */ }</script>
<x-hello>
  <shadow-root>
    Hello, <slot>World</slot>!

    <!-- Inserting a script tag here is the earliest
         point where we can attach the shadow root upon
         rehydration. This ensures it gets rehydrated as
         soon as possible and is also compatible with
         streaming. -->
    <script>rehydrate()</script>
  </shadow-root>
</x-hello>

The components you write aren’t any different from what you’d normally write on the client; you’re just wrapping some server code around them to do the serialisation and output.
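
The rehydrate() body is elided above, but the idea is simple enough to sketch. The following is only an illustration of the concept, not the actual skatejs/ssr implementation; it assumes an open root and ignores nested shadow roots:

function rehydrate() {
  // document.currentScript is the <script> tag that invoked this function;
  // it sits inside the serialised <shadow-root> element.
  const script = document.currentScript;
  const fauxRoot = script.parentNode; // the <shadow-root> placeholder
  const host = fauxRoot.parentNode;   // the element to attach the root to

  fauxRoot.removeChild(script);

  // Undo the serialised distribution: move slotted content back out to
  // the host so it becomes light DOM again.
  for (const slot of fauxRoot.querySelectorAll("slot")) {
    while (slot.firstChild) {
      host.appendChild(slot.firstChild);
    }
  }

  // Attach a real shadow root, move the remaining serialised content (the
  // shadow DOM) into it, then remove the now-empty placeholder.
  const shadowRoot = host.attachShadow({ mode: "open" });
  while (fauxRoot.firstChild) {
    shadowRoot.appendChild(fauxRoot.firstChild);
  }
  host.removeChild(fauxRoot);
}

A real implementation also has to deal with closed roots, nested roots and streaming, but the mechanics are the same.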

Static SSR

If you don’t need that level of granularity, you can even use a script that ships with the library. It allows you to render multiple files of web components where the component you want to render is the default export of each file.

// src/index.js
class IndexPage extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: "open" }).innerHTML = "Home Page!";
  }
}
customElements.define("index-page", IndexPage);
export default IndexPage;

Then to output an ./index.html file from that component you could just run:

ssr --src src/*.js --out .

This means you can generate an entire site from a set of components with a single command. You don’t need to deliver any custom element definitions for the custom elements that aren’t dynamic or interactive (i.e. only used for templating) because they’re already rendered.

Dynamic SSR

If you wanted to, you could even dynamically render your components in a server like Hapi. The following example takes request parameters and spreads them onto the component that is loaded to render the page, thus giving it the ability to respond to dynamic input.

require("@skatejs/ssr/register");
const render = require("@skatejs/ssr");
const Hapi = require("hapi");
const fs = require("fs");

const server = new Hapi.Server();
server.connection({
  host: "localhost",
  port: 8000
});

server.route({
  method: "GET",
  path: "/{page*}",
  handler(request, response) {
    const page = `./pages/${request.params.page || "index"}.js`;
    const Page = fs.existsSync(page)
      ? require(page)
      : require("./pages/404.js");
    return response(
      render(Object.assign(new Page(), request.params))
    );
  }
});

server.start(err => {
  if (err) throw err;
  console.log("Server running at:", server.info.uri);
});

As noted above in the “User experience” section, make sure you measure the performance of your server because this can become a performance bottleneck.

Future-spec compliance

We’re currently using a <shadow-root /> element as a placeholder for where the shadowRoot should be attached because:

  1. It’s clear and descriptive as to where the shadow root should be attached.
  2. A <shadow-root /> element seems like it’d be the most standards-compliant way to declaratively attach a shadow root.
  3. We want the option to switch to using a custom <shadow-root /> element definition — as opposed to inline <script /> tags — once Custom Element adoption becomes simpler because it yields better overall performance.

However, in the process of doing all of this work, we noticed that while this declarative API works well for rehydration, it raises a question: what if we wanted to use it declaratively in something like React?

class App extends React.Component {
  render() {
    return (
      <div>
        <shadow-root>
          <slot />
        </shadow-root>
      </div>
    );
  }
}

This isn’t specific to React, but it’s a perfect example that shows the issue. The <shadow-root /> element is essentially a “child” of the host element now because it’s visible in the composed tree. This isn’t possible because you need to be able to differentiate between light DOM and shadow DOM and that’s impossible to do if the shadow root is also light DOM.

There are other issues, too. What if you remove the element? You can’t undo a shadow root. What if another root is appended? You can’t have more than one root. A shadow root also requires a mode, and it’s impossible to create an element and specify an argument at the same time using document.createElement(). If the shadow root removes itself, its host is mutated and the world as React knew it would have changed. No bueno.

One way around this is to use a shadow-root attribute.

<div shadow-root="open">
  <slot />
</div>

The spec raises two questions about this approach:

  1. You cannot undo — or have multiple — shadow roots.
  2. Shadow roots require you specify a mode.

To which we can probably answer:

  1. The attribute would be a one-time deal. Any further mutations to the attribute have no effect, possibly with a warning if attempted.
  2. The attribute can default to one of the possible values, also possibly warning if a valid value isn’t provided.

The upside of this over the current method would be that we wouldn’t have an extra element in the DOM, and we wouldn’t be mutating the host by removing the shadow root node. We’ll likely be switching to this method in the near future.
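
As a thought experiment, here’s how client-side code could process such an attribute in user-land today. Nothing like this is specified or shipped anywhere; it’s just a sketch, with the defaulting and warning behaviour taken from the answers above:

for (const host of document.querySelectorAll("[shadow-root]")) {
  // Default to "open" when an invalid (or empty) value is provided,
  // warning as suggested above.
  const value = host.getAttribute("shadow-root");
  if (value !== "open" && value !== "closed") {
    console.warn(`Invalid shadow-root mode "${value}", defaulting to "open".`);
  }
  const shadowRoot = host.attachShadow({
    mode: value === "closed" ? "closed" : "open"
  });

  // The element's children become the shadow DOM content.
  while (host.firstChild) {
    shadowRoot.appendChild(host.firstChild);
  }
}

In a standards version, the parser itself would do this, which is exactly what would make it work without JavaScript.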

Positive side-effects

SSR’ing is just one thing you can do in Node as a result of having access to the necessary DOM APIs; there’s so much more you can do!

Mocha, et al

No more having to spin up a browser to run your tests. If you want to run your tests in Mocha, all you have to do is make sure the necessary APIs are available before running your code:

// mocha test/mocha.js
require('@skatejs/ssr/register');

describe('my web components', () => {
  it('should do something rad', () => {
    // do your DOM stuff now
  });
});

Jest

Almost immediately, I realised that if we implemented the necessary DOM APIs for web components in Node, we might be able to test them using Jest. The problem was that Jest uses JSDOM as its default “environment”, and the docs around how to write a custom environment seemed a bit thin.

After some hacking around, I found that I could simply extend the Node environment they built and hack in our Undom extensions.
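
If you’re curious what that looks like, a custom environment is just a class that extends Jest’s built-in Node environment. The sketch below is illustrative rather than the actual @skatejs/ssr/jest source; which globals get copied across (and that the register module patches them onto Node’s global) are assumptions:

// custom-environment.js
const NodeEnvironment = require("jest-environment-node");

class WebComponentsEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    // Patch in the Undom-based DOM so component code can use it as globals.
    require("@skatejs/ssr/register");
    // Expose the globals the tests need on the sandboxed test context.
    this.global.window = global.window;
    this.global.document = global.document;
    this.global.HTMLElement = global.HTMLElement;
    this.global.customElements = global.customElements;
  }
}

module.exports = WebComponentsEnvironment;

With a file like that in place, you point Jest’s testEnvironment at it — or just use the one that ships with the library, as shown below.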

I’m super excited about this because Jest is a fantastic testing framework. I still can’t get over the fact that we’ve converted all the tests for SkateJS and several libraries within the SkateJS org to use Jest. Other than the differences between testing frameworks, it’s literally as simple as doing:

// package.json
{
  "jest": {
    "testEnvironment": "@skatejs/ssr/jest"
  }
}

Infinity and beyond

Writing libraries is fun, but ultimately the definition of success here is whether they can be made mostly — or completely — redundant.

On top of seeing support for web components in JSDOM and Domino, I’d like to fully port over our implementation into separate Undom plugins as opposed to bundling it into this library, giving the consumer the choice to use any implementation they want. Aside from SSR, this would enable usage with Mocha, Jest or any other Node testing framework like Tape.

It would be great to get a standard way to serialise a DOM tree into a composed tree. Maybe something like Element.prototype.composedHTML or ShadowRoot.stringify(tree). It may have to take into account the possible eventuality of a custom imperative distribution API (links here and here), so being able to do this at the standards level would mean that the imperative APIs would have to consider it, else they’d likely be incompatible.
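
To make the idea of composed-tree serialisation concrete, here’s a rough user-land sketch. The composedHTML name is hypothetical (as are the proposals above), and it ignores comments, closed roots and void elements:

function composedHTML(node) {
  if (node.nodeType === Node.TEXT_NODE) {
    return node.textContent;
  }
  if (node.nodeType !== Node.ELEMENT_NODE) {
    return "";
  }

  const attrs = Array.from(node.attributes)
    .map(({ name, value }) => ` ${name}="${value}"`)
    .join("");

  // If the element hosts a shadow root, serialise that instead of its light
  // DOM; if it's a slot, serialise whatever is distributed into it.
  let children;
  if (node.shadowRoot) {
    children = Array.from(node.shadowRoot.childNodes);
  } else if (node.localName === "slot") {
    children = node.assignedNodes({ flatten: true });
  } else {
    children = Array.from(node.childNodes);
  }

  const inner = children.map(composedHTML).join("");
  return `<${node.localName}${attrs}>${inner}</${node.localName}>`;
}

You’d call it with a host element — composedHTML(document.querySelector("x-hello")) — and get back markup with the distribution already applied.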

The biggest thing to take away here is that we absolutely need a declarative way to represent shadow roots in HTML so that we can serve crawlers, present content to users without relying on JavaScript, and declaratively leverage the power of Shadow DOM in other libraries and frameworks. The fact that it’s such a widespread pattern points to a need for the platform to provide primitives for it.

Thanks!

Thanks to Bede Overend, Rob Dodson and Sunil Pai for reviewing and providing constructive feedback on this article.

Thanks to Justin Fagnani for providing the original idea that I pilfered.

A very special thanks goes to Bede (mentioned above) for his extensive support and contributions to the skatejs/ssr library.

