Web Components and SEO

Kyle Buchanan
PatternFly Elements
7 min read · Nov 15, 2019

We all understand how important SEO is and how crucial it is for crawlers to be able to understand the content in our sites and apps. I work at Red Hat on the Customer Portal, our support and documentation site, and over the last 90 days, 72% of our referred traffic has come from search engines. That’s a big number. So our technology choices about how we deliver content to our users really matter.

We’ve been heavily investing in developing web components, specifically with the PatternFly Elements project, to deliver a more consistent look and feel to our users across all of the Red Hat web properties. In the process of developing these components, our SEO teams wanted to make sure that using web components on our sites and apps wouldn’t negatively impact the SEO performance of our pages. So we put together a quick test that looked at three different component architecture approaches for developing web components.

The test

We set up a simple page at https://youthful-jang-0343f5.netlify.com/ for validation. There are three components on the page that all have the same job: delivering an H2 with some text. The code for these tests lives on GitHub. The goal is to see how different search engine bots crawl web components and whether changing the architecture changes how the bots see the content. For this test, we used the webmaster tools in Google, Bing, and Yandex to see how the pages are indexed.

Three different component architecture approaches

Light DOM Approach

The component is just a shadow root with a slot that will accept anything. This is the primary approach that the PatternFly Elements project uses.

class WcHeaderLightDom extends HTMLElement {
  constructor() {
    super();

    this.attachShadow({ mode: "open" });
    this.shadowRoot.innerHTML = `
      <style>
        :host {
          display: block;
        }
      </style>
      <slot></slot>
    `;
  }
}

window.customElements.define("wc-header-light-dom", WcHeaderLightDom);

We created an instance of wc-header-light-dom in our markup like this:

<wc-header-light-dom>
  <h2>Header in the Light DOM</h2>
</wc-header-light-dom>

Shadow DOM with a Slot Approach

The component has a shadow root with an H2 and a slot inside the H2 for the text.

class WcHeaderJustSlot extends HTMLElement {
  constructor() {
    super();

    this.attachShadow({ mode: "open" });
    this.shadowRoot.innerHTML = `
      <style>
        :host {
          display: block;
        }
      </style>
      <h2><slot></slot></h2>
    `;
  }
}

window.customElements.define("wc-header-just-slot", WcHeaderJustSlot);

We created an instance of wc-header-just-slot in our markup like this:

<wc-header-just-slot>
  Header with Just a Slot
</wc-header-just-slot>

Attribute Approach

The component has a shadow root with an H2 and no slot. We observe the “text” attribute, and when it changes, we set the textContent property of the H2 in the shadow root. This is pretty similar to how a lot of Polymer components are built.

class WcHeaderNoSlot extends HTMLElement {
  static get observedAttributes() {
    return ["text"];
  }

  constructor() {
    super();

    this.attachShadow({ mode: "open" });
    this.shadowRoot.innerHTML = `
      <style>
        :host {
          display: block;
        }
      </style>
      <h2></h2>
    `;
  }

  attributeChangedCallback(attr, oldValue, newValue) {
    this.shadowRoot.querySelector("h2").textContent = newValue;
  }
}

window.customElements.define("wc-header-no-slot", WcHeaderNoSlot);

We created an instance of wc-header-no-slot in our markup like this:

<wc-header-no-slot text="Header without a Slot"></wc-header-no-slot>

The end result is a page with three different components that all display an H2 to the user.
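For a browser, or a crawler, that does execute the JavaScript, here’s a rough DevTools-style sketch of each component’s flattened tree, with the shadow roots expanded and slotted content shown where it’s distributed:

<wc-header-light-dom>
  #shadow-root
    <slot>
      <!-- slotted in from the light DOM -->
      <h2>Header in the Light DOM</h2>
    </slot>
</wc-header-light-dom>

<wc-header-just-slot>
  #shadow-root
    <h2>
      <slot>Header with Just a Slot</slot>
    </h2>
</wc-header-just-slot>

<wc-header-no-slot text="Header without a Slot">
  #shadow-root
    <h2>Header without a Slot</h2>
</wc-header-no-slot>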

The results

Google

Googlebot renders all three approaches exactly as a user would see them on the web. Googlebot executes JavaScript, so all three web components render identically.

Result: PASS with all three approaches!

Bing

Bingbot only renders the Light DOM Approach correctly. The Shadow DOM with a Slot Approach at least renders the text, but it has no semantic value. The Attribute Approach isn’t rendered at all. Bingbot does not execute JavaScript, so these results are expected.

Result: Partial Pass

Bingbot doesn’t execute JavaScript, so it only sees the markup that’s on the page when the page first loads. Since the Light DOM Approach puts an H2 in the light DOM, we still get the semantic value we’re looking for. The other two approaches can have a negative impact on your search performance.
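To make that concrete, here’s all that a crawler that skips JavaScript ever sees. The shadow roots are never created, so only the light DOM counts:

<!-- Light DOM Approach: the H2 and its semantics are in the light DOM -->
<wc-header-light-dom>
  <h2>Header in the Light DOM</h2>
</wc-header-light-dom>

<!-- Shadow DOM with a Slot Approach: bare text, no heading element -->
<wc-header-just-slot>
  Header with Just a Slot
</wc-header-just-slot>

<!-- Attribute Approach: no text content at all, just an attribute -->
<wc-header-no-slot text="Header without a Slot"></wc-header-no-slot>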

Yandex

When we used Yandex’s Audit pages for mobile compatibility tool, we saw that Yandex also does not appear to run JavaScript. The Light DOM Approach renders the way we’d expect, but the Shadow DOM with a Slot Approach just shows text with no semantic value and the Attribute Approach doesn’t render at all.

Result: Partial Pass

Yandex, Bing, and all other search engine crawlers that don’t execute JavaScript will see the page this way.

A look at all of the search engine crawlers

In this screenshot, where I isolated the plain JS column for the sake of visibility, you can see that only the Google and Ask.com crawlers execute JavaScript.

From 2018, source: https://moz.com/blog/search-engines-ready-for-javascript-crawling

What does this mean for developing and consuming web components?

As a developer or consumer of web components, there are two things to consider before jumping right in: context and your search engine traffic.

Context matters

Is your site a marketing site where click-throughs from search engines are tied to generating revenue for your company? If so, you’d probably be most interested in developing and consuming web components that rely on light DOM to deliver your message. This way the semantic value of your content is still crawlable by search engines. Making sure the heading level tags, the links, and other tags are in the light DOM is vital so that all crawlers can understand the content. So the Light DOM Approach is a must for you.
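As a sketch, reusing the wc-header-light-dom component from the test (its slot accepts anything; the content below is made up for illustration), everything a crawler needs stays in the light DOM:

<wc-header-light-dom>
  <h2>Our Product</h2>
  <p>Marketing copy that needs to be indexed.</p>
  <a href="/products/example">Learn more about Our Product</a>
</wc-header-light-dom>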

Is your site an app that lives behind authentication? If so, light DOM might not hold as much value for you. For the most part, a web crawler isn’t going to have access to your content and you can also rely on users having JavaScript turned on (there’s some debate about this though). So the Shadow DOM with a Slot Approach or the Attribute Approach, or a combination of the two, could work for you.
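Here’s a minimal sketch of that combination, using a made-up wc-badge-combo element: the label arrives through a slot, while an observed count attribute is rendered straight into the shadow root.

class WcBadgeCombo extends HTMLElement {
  static get observedAttributes() {
    return ["count"];
  }

  constructor() {
    super();

    this.attachShadow({ mode: "open" });
    this.shadowRoot.innerHTML = `
      <style>
        :host {
          display: block;
        }
      </style>
      <h2><slot></slot> <span></span></h2>
    `;
  }

  attributeChangedCallback(attr, oldValue, newValue) {
    // Only "count" is observed, so newValue is always the count here.
    this.shadowRoot.querySelector("span").textContent = newValue;
  }
}

window.customElements.define("wc-badge-combo", WcBadgeCombo);

Used like <wc-badge-combo count="3">Open support cases</wc-badge-combo>, it keeps the rendering in the shadow DOM, which is fine behind authentication but invisible to crawlers that don’t run JavaScript.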

Your search engine traffic matters

If you find that 99% of your search engine traffic is coming from Google, you can definitely get away with the Shadow DOM with a Slot Approach and the Attribute Approach.

However, if your content needs to be crawlable for all search engines, the Light DOM Approach is the best way to go. For example, at Red Hat, we get a pretty sizable amount of traffic from Bing. So we can’t just ignore that major portion of our traffic.

Just take a look at this picture, taken at Pubcon, a social media and search engine optimization conference, in Las Vegas in October 2019. As the title of the slide clearly states, Bing is bigger than you think.

PatternFly Elements and the Light DOM Approach

The approach we’ve taken with PatternFly Elements has primarily been the Light DOM Approach. Our components are meant to be used in both marketing and app contexts. Given that context, we need search engines to be able to crawl our content. If you look at pfe-accordion, or pfe-tabs, or pfe-modal, you’ll see how we put the content with its semantic value in the light DOM.
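For example, pfe-accordion takes its headings and panel content as light DOM children, roughly like this (see the pfe-accordion docs for the exact markup):

<pfe-accordion>
  <pfe-accordion-header>
    <h3>Is the heading crawlable?</h3>
  </pfe-accordion-header>
  <pfe-accordion-panel>
    <p>Yes: the H3 and this paragraph live in the light DOM.</p>
  </pfe-accordion-panel>
</pfe-accordion>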

I’m not saying that our approach with PatternFly Elements is perfect, but treating content as king in our components and using light DOM helps us address the SEO issues I’ve presented here. And yes, pre-rendering tools like prerender.io can help, but I don’t think we can expect all of the sites and apps that use PatternFly Elements or other web components to use a service like prerender.io for crawlers that don’t execute JavaScript.

Wrap up

In the end, our SEO team was satisfied with the approach that we’ve been taking with PatternFly Elements and they’ve been able to verify that we have not negatively impacted the SEO performance of our pages and apps at Red Hat.

Whether you’re building or consuming web components in your pages and apps, the way the components are built can really impact SEO. Google and Ask.com are the only two crawlers that are currently executing JavaScript. All of the rest are just looking at the markup when the page loads. So remember that the context of your site or app and the search engines that you need to support can guide you in developing and consuming web components.
