Goodbye iframes

or: How I Learnt to Stop Worrying and Love the Shadow DOM

Toby Cox
BBC Product & Technology
9 min read · Jul 24, 2019


Recently we, in the BBC’s Visual Journalism department, have made the switch from iframes to shadow DOM. It took us six months from concept to production but the results make it worthwhile: our products load more than 25% faster and feel more interactive. But why were we using iframes in the first place, why didn’t we just use Web Components, and what lessons can I impart to you, dear reader?

The content that the Visual Journalism department produces is added to pages that we don’t control: pages built by a large-scale enterprise CMS and publishing system and maintained by other teams. Each of our projects is bespoke and is compiled for multiple endpoints and use cases. When another department changes something on the front end (CSS, HTML or JS), we cannot risk it breaking all of our old content. So sandboxing our content has made sense for the longest time; it protects our content from changes beyond our control.

And, reciprocally, it protects the rest of the BBC from our changes, so we can’t bring down the article pages if we do something a little careless. I prefer this level of risk.

Syndication is also easier with our iframes. Third parties can ingest our content without us losing control.

The downsides of iframes

However, there were many drawbacks to using iframes as the de facto method of distributing our content to the different internal endpoints. The most obvious would be speed.

When we set up an iframe, we need to send messages from the iframe to the parent/host page to communicate events backwards and forwards: if the user resizes their window, we need the parent to tell the iframe; if the iframe changes its dimensions by loading in extra content, the iframe needs to inform the parent; if a user clicks a button, we need to inform the parent so we can track this data to measure our project efficiency etc. Each of these messages takes time and when we are trying to increase front end performance, eliminating these messages would be a big boost.
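To make that overhead concrete, the messaging looks something like the sketch below. The selectors, message shapes and class names are illustrative rather than our actual implementation:

```javascript
// Parent page: tell the embed about viewport resizes, and resize the
// iframe when the embed reports a new height.
const frame = document.querySelector('iframe.vj-embed');

window.addEventListener('resize', () => {
  frame.contentWindow.postMessage(
    { type: 'viewport-resize', width: window.innerWidth },
    '*'
  );
});

window.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'embed-height') {
    frame.style.height = event.data.height + 'px';
  }
});

// Inside the iframe: report our height when content changes, and forward
// clicks so the parent can record analytics.
const reportHeight = () =>
  window.parent.postMessage(
    { type: 'embed-height', height: document.body.scrollHeight },
    '*'
  );

document.querySelector('button.show-more').addEventListener('click', () => {
  window.parent.postMessage({ type: 'analytics', action: 'show-more-click' }, '*');
  reportHeight();
});
```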

With iframes, the iframe itself is a separate document, so the page has to download and render that document, which in turn requests its own assets such as JavaScript and CSS. Consequently, iframe initialisation is much slower than shadow DOM, where all of the requests are already ‘in the page’, so to speak. The shadow DOM is one of the four Web Component standards; it can deliver content with similar encapsulation to an iframe, but without the negative overheads.
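For comparison, delivering content via a shadow root needs no second document at all. A minimal sketch, where the placeholder id is illustrative:

```javascript
// Attach a shadow root to a placeholder that is already in the page,
// then render markup and styles into it. Styles inside the shadow root
// do not leak out to the host article.
const host = document.getElementById('vj-embed-placeholder');
const root = host.attachShadow({ mode: 'open' });

root.innerHTML = `
  <style>
    /* scoped to this shadow root only */
    p { font-family: sans-serif; margin: 0; }
  </style>
  <p>Interactive content goes here.</p>
`;
```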

Iframes are inherently unresponsive, so we have to set up a postMessage-driven mechanism to keep the height up to date manually. This comes with library bloat and computational overhead.

Iframes are hard to write for: CORS errors are common, as iframes are locked down, and things like hooking into scroll events need to be ‘normalised’ to take the iframe offset into account before being forwarded to the iframe.
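A rough sketch of that normalisation, with illustrative names rather than our real code:

```javascript
// Parent page: forward the scroll position to the iframe, adjusted for the
// iframe's own offset, so the embed can reason in its own coordinates.
const frame = document.querySelector('iframe.vj-embed');

window.addEventListener('scroll', () => {
  // How far the viewport top has scrolled past the top of the embed.
  const embedTop = frame.getBoundingClientRect().top + window.scrollY;
  frame.contentWindow.postMessage(
    { type: 'scroll', scrolledPastEmbed: window.scrollY - embedTop },
    '*'
  );
});
```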

Iframes require a ‘title’, otherwise screen readers announce them simply as ‘iframe’. Shadow DOM has no such requirement as it’s ‘native’ to the page.

Then there is the matter of SEO.

Iframed content is dealt with, quite rightly, as a separate domain and indexed accordingly. We serve Visual Journalism content (along with much of the BBC’s static content) on a separate cookieless domain through a CDN: bbci.co.uk. When an iframe requested by a bbc.co.uk page is served from bbci.co.uk, a search engine does not consider it “part of” bbc.co.uk. Even though our bbc.co.uk page visually includes content from that domain, and it is our own content, in the eyes of a search engine spider it doesn’t “belong to” bbc.co.uk; instead, it gets indexed under bbci.co.uk.

It is generally a bad thing for us that content produced by the BBC is not considered part of the bbc.co.uk page it is embedded on: we would like any BBC authored content to be searchable regardless of where it originates. We deal with this by having core content accessible to the search engines. This means that we embed the nub of the interactive piece as flat HTML that screen readers, older browsers and search engines can access.

However, search engines have improved: they now both index pages and execute JavaScript (JS). This means that Google now fires our JS, which in turn hides or deletes the core content from the parent page and loads the iframe in its place.

Therefore we are not altogether sure that our core content does much besides giving a viable experience to users who have JS turned off, or those whose machines fail to cut the mustard for our enhanced experience.
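For the curious, the core-content swap follows the familiar ‘cuts the mustard’ pattern. The exact feature checks, class name and boot function below are illustrative, not our production code:

```javascript
// Core content is flat HTML already in the page. If the browser 'cuts the
// mustard', we remove it and boot the enhanced experience in its place.
const cutsTheMustard =
  'querySelector' in document &&
  'addEventListener' in window &&
  'localStorage' in window;

if (cutsTheMustard) {
  const core = document.querySelector('.vj-core-content');
  if (core) {
    core.parentNode.removeChild(core);
  }
  bootEnhancedExperience(); // loads the interactive version (illustrative)
}
```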

The upshot is that all of our iframed content is treated as its own site, with no benefit to the parent page or domain. You can verify this by searching for ‘“How popular is my name?” site:bbc.co.uk’ on Google versus DuckDuckGo and Bing. Google, which executes our JS, cannot see this h2 tag on any page on bbc.co.uk. DuckDuckGo and Bing do not fire the JS, so they see our inline core content.

It would be much better if we could encapsulate our content without these overheads and drawbacks.

Researching options for replacing iframes

We had a feeling that Web Components would be a way of encapsulating our content, so we set about investigating.

Initially, we considered using web components for encapsulation. This would have involved using custom elements. However, it quickly became apparent that the main benefits of web components for our use case are really provided by shadow DOM. We want encapsulation of an element whose content will appear seamlessly as part of the page. Shadow DOM gives us that without any need for a custom element. The encapsulation is not as absolute as it is with an iframe because styles from the light DOM can affect elements within the shadow DOM. But we decided this level of encapsulation is sufficient for our needs, and custom elements do not appear to offer stronger encapsulation anyway.

Custom elements might be an attractive way forward for the Visual Journalism (VJ) components library at some point. Each component we maintain could be used in our content via a tag just for that component (for example <news-vj-responsive-image> or <news-vj-image-slider>). However, we realised that, at the level of projects, giving each one its own tag doesn’t make much sense. We’d be inventing tags that would likely only ever get used once and which are only used for encapsulation. For this use case, custom elements would have been redundant.
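Had we gone down that road for the components library, registering a component might look something like the following purely hypothetical sketch (we have not built this):

```javascript
// A hypothetical components-library custom element.
class VjResponsiveImage extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
  }

  connectedCallback() {
    const src = this.getAttribute('src') || '';
    const alt = this.getAttribute('alt') || '';
    this.shadowRoot.innerHTML = `
      <style>img { display: block; width: 100%; height: auto; }</style>
      <img src="${src}" alt="${alt}">
    `;
  }
}

customElements.define('news-vj-responsive-image', VjResponsiveImage);
// Usage in markup:
// <news-vj-responsive-image src="chart.png" alt="A chart"></news-vj-responsive-image>
```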

The shadow DOM felt much more robust than iframes: content seemed to load instantly and interacted natively with its surroundings. But when we conducted benchmarking tests with Puppeteer, we were getting an improvement in time to first meaningful paint of just 11%.
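We haven’t included our benchmark harness here, but a minimal Puppeteer sketch of this kind of measurement might look like the following, using the browser’s paint-timing entries as a rough stand-in for first meaningful paint:

```javascript
const puppeteer = require('puppeteer');

// Load a page and read the browser's paint timings as a rough proxy for
// how quickly content becomes visible. Sketch only; our real harness and
// metrics are not shown in this article.
async function measure(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });

  const paints = await page.evaluate(() =>
    performance.getEntriesByType('paint').map((entry) => ({
      name: entry.name,
      startTime: entry.startTime,
    }))
  );

  await browser.close();
  return paints; // e.g. first-paint, first-contentful-paint
}

measure('https://www.bbc.co.uk/news/uk-england-45559619').then(console.log);
```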

Despite this, it just felt so right that we knew we had to continue, whatever the rocky road ahead. The perceived speed improvement was so vast, so impressive, that we had to drive on. We knew that perceived speed is often more important to users than measurable performance gains.

The technical challenges of using shadow DOM

We were quite excited at this point, as we had a basic prototype up and running. So how long was it going to take to get this solution production ready? I mean: how many challenges could we possibly find on our journey?

In the development phase, we faced and overcame the following: a big mess of polyfills, rewriting all our code and dependencies to avoid using document.querySelector(), not being able to use media queries to determine the width of our content on a page, leakage in and leakage out, and a few JS API quirks.

If you are considering the move to Shadow DOM, you need to look out for the same pitfalls:

  • Polyfills - we considered webcomponents-lite.js, the webcomponents-sd-ce.js bundle (shady DOM/CSS and custom elements only), the webcomponents-hi-sd-ce.js bundle (HTML imports, shady DOM/CSS and custom elements), the shady DOM/CSS shims, and webcomponents-loader.js. For our purposes none of these worked well enough to continue into production: there was leakage of CSS and JS, either just in, just out, or both. Since we would be maintaining the iframe format for syndication anyway, we decided to serve the iframe version to unsupported browsers: those that do not support document.head.attachShadow (the feature-detection sketch after this list shows the shape of that check).
  • document.querySelector - elements inside a shadow root cannot be reached from document, so document.querySelector() and document.querySelectorAll() do not find them. We therefore had to rewrite all of our in-house vanilla JS components to initialise with a scope (defaulting to document) that could take the shadow root returned from attachShadow; the scoped-initialisation sketch after this list shows the pattern. This was a little painful and also means we have to be careful when using external libraries.
  • media queries - with our content being used on mobile, on desktop pages with a right-hand column, on desktop pages without one, and on full-width full-bleed pages, we need to be aware of the size of our content and enable media queries where applicable. Since our content is hosted by others who are in charge of the breakpoints, we have in the past used media queries to work out the width of our content in relation to the page. With iframes, media queries gave us the width of our content; with shadow DOM, they give us the width of the viewport itself. This is a huge challenge for us. We now have no way of knowing how big our content is when it’s served, and have to ensure it is so pliable that it is perfect at every pixel width, rather than the 3 or 4 widths we were used to. The good news is that there is a solution to our problem in element queries. The bad news is that no browser supports them yet.
  • Leakage - by nature there is almost zero CSS leakage, but there is a modicum of JS leakage when using, for instance, identical libraries. We could counter this with namespacing and explicit context, so it became a non-issue. Leakage with the polyfills, however, was excessive to the point of making the shadow DOM unusable. We thought that closed mode for the shadow DOM might have been the solution, but it turns out that it doesn’t work how one might expect, as Leon Revill explains.
  • getBoundingClientRect - getBoundingClientRect().top works differently inside the shadow DOM: you get the position relative to the host page’s scroll, because the element really is part of the page (whereas inside an iframe you get the offset relative to the top of the iframe document). Also, on Firefox we have seen scroll values returned as a string rather than a number, which is confusing. We have built in workarounds for both.
  • Anchor points - setting window.location.hash = '#example' doesn’t jump the page down as expected, and doesn’t then change the focus, so we were unable to create a ‘skip link’, for example.
  • React can’t handle events within the shadow DOM - so click, submit, hover etc. will not work with React.
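To illustrate the fallback decision from the polyfills point above: the support check itself is tiny, and everything else hangs off it. The boot functions here are illustrative names, not our actual build:

```javascript
// Serve the shadow DOM version only to browsers that support it natively;
// everyone else keeps getting the existing iframe embed.
if (typeof document.head.attachShadow === 'function') {
  bootShadowDomEmbed(); // attach a shadow root and render in the page
} else {
  bootIframeEmbed();    // fall back to the iframe version we already serve
}
```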
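And to illustrate the document.querySelector rewrite: each component takes a root node to query within, defaulting to document, so the same code runs in both the iframe and shadow DOM builds. The component below is purely illustrative:

```javascript
// Components take a 'scope' to query within, defaulting to document, so the
// same code works in a normal page and inside a shadow root.
function initTabs(scope = document) {
  scope.querySelectorAll('.vj-tab').forEach((tab) => {
    tab.addEventListener('click', () => {
      /* activate the clicked tab */
    });
  });
}

// Shadow DOM build: the scope is the shadow root returned by attachShadow.
const shadowRoot = document
  .getElementById('vj-embed')
  .attachShadow({ mode: 'open' });
shadowRoot.innerHTML = '<button class="vj-tab">Tab one</button>';
initTabs(shadowRoot);

// Iframe build: the default document scope still works unchanged.
initTabs();
```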

The results

The results in the real world were much better than our benchmarks had told us. As an example, let’s look at two projects that are ostensibly the same.

https://www.bbc.co.uk/news/uk-england-41160596

https://www.bbc.co.uk/news/uk-england-45559619

The difference is that the second one uses shadow DOM and the first uses an iframe. The second version even uses more data and D3, but still loads more quickly. And the fact that it appears to load faster is even more important: the perception is of speed. On a fast connection the measured difference is just 0.4s.

Using real user data, we have found that on our first project 95% of users loaded the content in under four seconds, versus 77% of iframe users. In a larger project, such as this mapping of affordable rents project, the difference was starker: 92% vs 46%.

We are going to make our data collection more granular. Before the shadow DOM, we were trying to isolate which users were taking an unacceptable amount of time to load our content; now we want to see the full spread of performance. What percentage of users load in under a second? What percentage in under 300ms?

And it’s not just speed. The baby names project can now flow over the surrounding content in a way that feels natural to the user (e.g. position: absolute lets content flow over the main article as if it were part of it). No longer do we have to add to the height of the iframe to accommodate extra options.

Conclusion

I wonder how many readers are in a similar position to ours: requiring encapsulation of content on their own site. If you are in this position, then shadow DOM is going to be a clear winner.

If you use iframes to host your content now (perhaps syndicating onto other sites) then shadow DOM may be for you. Twitter recently used it to change their ‘share this tweet’ feature: they moved it from iframes to shadow DOM, primarily for the speed boosts we have seen. They will also be getting SEO boosts from doing this, which the BBC may not.

When Twitter embeds tweets in a third-party site, each embed typically contains at least three links. When this code was in an iframe, search engines saw it as a twitter.com domain linking to a twitter.com domain: internal linking. With the shadow DOM method of embedding, it appears that scores of sites are linking directly to twitter.com: external linking.

Our syndicated content would give a boost to the syndication partner and possibly count as duplicate content for us, so there may be resistance to transforming all of our iframes into shadow DOM. But this experiment has been extremely beneficial for the vast majority of our work. We hope you enjoy the changes.
