Optimizing Web UI Performance: A Deep Dive into Lazy Rendering and IntersectionObserver API

Martin Chaov
DraftKings Engineering
9 min read · Aug 16, 2023


Lazy rendering is a design pattern in web development that delays the initialization or rendering of elements until they are needed. This technique is particularly useful for improving the performance and efficiency of web applications, especially those that need to present large amounts of content to their users.

Traditionally, web pages load all elements at once, which can lead to slower load times and unnecessary data usage. With lazy rendering, only what is necessary is loaded up front, while the rest loads on demand based on a user-controlled trigger such as scrolling. Utilized correctly, this technique can reduce initial load time and save bandwidth and system resources.

Lazy rendering can be implemented using various techniques, including JavaScript events, the IntersectionObserver API, and third-party libraries.

At DraftKings, we utilize lazy rendering in a few ways:

  • triggering a download of data when the user scrolls to a certain place and rendering the data as soon as it is ready
  • quickly downloading data in the background but not rendering it until needed

Both techniques have their use cases and can mean the difference between a sluggish application and a satisfying user experience.

The goals of lazy rendering are to:

  • improve perceived performance
  • reduce the utilization of various resources

This article presents a distilled solution using the IntersectionObserver API and TypeScript.

What is IntersectionObserver?

“The Intersection Observer API provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport.” — MDN

To simplify — the IntersectionObserver executes a callback function when a specified HTML element enters the viewport (“root” element in the example below).

const observer = new IntersectionObserver(
  handleIntersection, // reference to the callback function
  {
    root: htmlElementReference, // which element acts as the viewport; null for the document's viewport
    rootMargin: "0px", // margin around the target added to the box before calculating visibility; defaults to zero
    threshold: 1.0 // how much of the element should be visible before triggering the callback
  }
);

observer.observe(htmlElement) // reference to the HTML element to check for visibility against the viewport

The options object has additional intricacies that are not discussed in this article. For more details, check the IntersectionObserver constructor options on MDN.
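For example, `threshold` also accepts an array of values, in which case the callback fires every time visibility crosses one of the steps. A minimal sketch of this (the `buildThresholds` helper is illustrative, not part of the API):

```typescript
// Illustrative helper: generate evenly spaced threshold steps, so the
// callback fires at multiple visibility levels instead of only once.
function buildThresholds(steps: number): number[] {
  const thresholds: number[] = [];
  for (let i = 0; i <= steps; i++) {
    thresholds.push(i / steps); // e.g. steps = 4 -> [0, 0.25, 0.5, 0.75, 1]
  }
  return thresholds;
}

// Guarded so the sketch also parses outside a browser environment.
if (typeof IntersectionObserver !== "undefined") {
  const observer = new IntersectionObserver(
    entries => entries.forEach(e => {
      // intersectionRatio reports how much of the target is currently visible
      console.log(`${(e.intersectionRatio * 100).toFixed(0)}% visible`);
    }),
    { threshold: buildThresholds(4) }
  );
}
```

Multiple thresholds come in handy for progressive effects, such as fading content in as it scrolls into view.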

Example Implementation

The following example takes into account these guidelines:

  • make it reusable — abstract away the observer logic
  • make it reasonably optimal — no overoptimization, include basic things like synchronization with the animation frame
  • have a manageable scrolling experience — use a placeholder element to keep the scrollbar position and size consistent
  • use one observer per viewport
  • register different handlers for different elements

If you want to jump to the code repo in GitHub and start tinkering, here is the example code. There is also a CodePen linked further down in the article.

Renderer class

Abstracting away some of the logic in a reusable class.

/**
 * IntersectionObserverInit
 * Set up some reasonable defaults to make initialization straightforward
 * @see https://developer.mozilla.org/en-US/docs/Web/API/IntersectionObserver#constructor
 */
const defaultOptions: IntersectionObserverInit = {
  // root: null, // null === document's viewport
  rootMargin: "0px", // assume the box doesn't need margins
  threshold: .01 // assume the box should show as soon as its first pixels become visible
};

/**
 * VisibilityChangeCb
 * Export a type that makes it easier for implementers to work with callbacks
 * @see https://developer.mozilla.org/en-US/docs/Web/API/IntersectionObserverEntry
 */
export type VisibilityChangeCb = (e: IntersectionObserverEntry) => void

/**
 * Renderer
 * This class should be instantiated for every viewport that we would like to control.
 * There could be multiple instances for one viewport. As with everything, performance impact
 * should be measured.
 * @example
 * const observer = new Renderer()
 * observer.registerDom(
 *   htmlElementReference,
 *   entry => entry.isIntersecting ? "Yes" : "No"
 * )
 */
export class Renderer {

  private observer: IntersectionObserver // observer instance
  private animationFrameID: number = -1 // animation frame id used for cleanup
  private callBacks: Map<Element, VisibilityChangeCb> = new Map() // store all callbacks

  /**
   * Constructor
   * @param {IntersectionObserverInit} props options for the IntersectionObserver constructor
   * @returns {Renderer} renderer instance
   */
  constructor(props: IntersectionObserverInit = {}) {
    this.observer = new IntersectionObserver(
      this.scheduleUpdate,
      {
        ...defaultOptions, ...props // merge defaults with user-provided options
      }
    );
  }

  /**
   * registerDom
   * Start observing the specified element and store its callback for later use
   * @param {Element} element reference to an HTML element
   * @param {VisibilityChangeCb} callback function that takes an IntersectionObserverEntry as a parameter
   * @void
   */
  registerDom(element: Element, callback: VisibilityChangeCb) {
    this.callBacks.set(element, callback)
    this.observer.observe(element)
  }

  /**
   * unRegisterDom
   * Remove the element from the observe list and delete the reference to its callback
   * @param {Element} element reference to an HTML element
   * @void
   */
  unRegisterDom(element: Element) {
    this.callBacks.delete(element)
    this.observer.unobserve(element)
  }

  /**
   * scheduleUpdate
   * Schedule a render at the next convenient moment
   * @param {IntersectionObserverEntry[]} entries passed automatically by the IntersectionObserver upon invocation
   * @void
   */
  private scheduleUpdate = (entries: IntersectionObserverEntry[]) => {
    window.cancelAnimationFrame(this.animationFrameID) // drop the previous frame if it was not rendered

    this.animationFrameID = window.requestAnimationFrame(() => { // set up the next animation frame
      this.updateVisibility(entries) // execute the callbacks of each registered element
    })
  }

  /**
   * updateVisibility
   * The callback function invoked when the browser schedules an animation frame
   * @param {IntersectionObserverEntry[]} entries all observed entries that changed
   * @void
   */
  private updateVisibility = (entries: IntersectionObserverEntry[]): void => {
    entries.forEach(e => {
      const callback = this.callBacks.get(e.target)
      if (typeof callback === "function") {
        callback(e)
      }
    })
  }
}

Using this class is quite simple.

import { Renderer } from "./renderer"

/**
 * intersectionHandler
 * Called each time the observer detects a visibility change
 * @param {IntersectionObserverEntry} entry
 * @void
 */
function intersectionHandler(entry: IntersectionObserverEntry) { // takes an entry and processes it
  if (entry.isIntersecting) { // do something if content is visible
    entry.target.classList.add("isVisible")
  } else { // do something if content is not visible
    entry.target.classList.remove("isVisible")
  }
}

const observer = new Renderer() // bootstrap an observer

const boxes = document.querySelectorAll(".box") // get all of the boxes we wish to observe

boxes.forEach(x => observer.registerDom(x, intersectionHandler)) // register all of the boxes with the observer

Below is a working demo that also adds and removes content based on visibility.

Best practices

In our applications at DraftKings, we often have enormous amounts of content to present to our customers. This is especially true in our Sportsbook, where we also utilize push updates and the content must stay up to date to enable users to make informed decisions. A phone screen, or even a desktop screen, cannot fit this amount of information in a comprehensible way, so users end up seeing one screen's worth of data at a time. While there is content above and below the point where the user has scrolled, updates for that content are not relevant until it is presented on the screen, so they can wait for a more appropriate time to be propagated to the Document Object Model (DOM).

Below are some thoughts to consider when designing and implementing lazy rendering.

Batching content

Batching refers to the practice of loading multiple elements at once, rather than individually. There are different types of batching in relation to lazy rendering:

  • batching when downloading content — for example, pulling the next page when the user scrolls near the bottom of the current one
  • batching when rendering content — proactively adding or removing content from the DOM to optimize the performance of a page by reducing the number of nodes the browser has to keep in memory

Batching when downloading tends to be more efficient than downloading single entities, both for the client and the backend and in terms of network utilization. When downloading small amounts of data over HTTP in particular, a large share of the payload tends to be HTTP headers. In that case, it becomes more efficient to download content in batches (or pages, if you will).

Batching when rendering is a different kind of beast. Rendering many individual elements all the time can increase the number of reflows and repaints, which leads to layout thrashing. Batching is a balancing act, as loading too many elements at once can also slow down the rendering of the application.

Finding the sweet spot between how much to render and when can be tricky. One way to manage this is to use the requestAnimationFrame API and make sure each frame's work completes within a certain time budget.
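As a sketch of that idea, the loop below drains a work queue until a frame budget is spent and defers the remainder to the next animation frame. All names and the 8 ms budget are illustrative, not the exact implementation used at DraftKings:

```typescript
const FRAME_BUDGET_MS = 8; // leave headroom inside a ~16ms frame

// Pure helper: render items until the budget is spent, return the rest.
// The clock is injected so the logic can be tested without a browser.
function drainUntilDeadline<T>(
  queue: T[],
  render: (item: T) => void,
  now: () => number,
  budgetMs: number
): T[] {
  const start = now();
  while (queue.length > 0 && now() - start < budgetMs) {
    render(queue.shift() as T);
  }
  return queue; // whatever did not fit into this frame
}

// Browser wiring: keep rescheduling animation frames while work remains.
function renderInBatches<T>(queue: T[], render: (item: T) => void): void {
  const step = () => {
    const rest = drainUntilDeadline(queue, render, () => performance.now(), FRAME_BUDGET_MS);
    if (rest.length > 0) requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}
```

Injecting the clock keeps the batching logic pure, so the frame budget behavior itself can be covered by unit tests.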

Lazy rendering has design implications for backend services as well. Instead of fetching everything with one call, content is pulled based on some type of user interaction. The backend should be tailored to enable such implementations by providing a reasonably low-latency API with variable pagination support and content-filtering capabilities. Moreover, depending on the use case, the backend should also provide metadata such as the amount of available content, rate limits for the number of calls, counters for different types of content, and others.
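A paged endpoint consumed by the client could look roughly like the sketch below. The URL shape, query parameters, and `Page` metadata are assumptions for illustration, not a real DraftKings API:

```typescript
// Hypothetical page shape: items plus the metadata the client needs
// to decide when to stop requesting more content.
interface Page<T> {
  items: T[];
  total: number; // amount of available content
}

// Pure helper: build the paged request URL (parameter names assumed).
function buildPageUrl(base: string, page: number, pageSize: number): string {
  return `${base}?page=${page}&pageSize=${pageSize}`;
}

// Fetch one batch; called when the user scrolls near the end of the
// currently rendered content.
async function fetchPage<T>(base: string, page: number, pageSize = 20): Promise<Page<T>> {
  const response = await fetch(buildPageUrl(base, page, pageSize));
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.json() as Promise<Page<T>>;
}
```

The `total` field lets the client stop issuing requests once everything has been pulled, instead of probing for an empty page.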

Testing

Testing lazy rendering can be quite challenging. Apart from simply verifying that content renders, the UX should be tested to make sure content is presented at the right time and the experience stays smooth. This may include testing how the application behaves under different network conditions, on different device types, etc. Tests should tell engineers whether the user experience needs adjusting based on those conditions. Moreover, they should be designed to provide meaningful feedback beyond a basic pass/fail. Some points to consider:

  • Testing edge cases such as rapid scrolling up/down or sudden changes in network conditions to stress the lazy rendering implementation.
  • Tracking metrics such as CPU and memory usage, number of calls to the backend, download times, number of DOM elements, backend load.
  • How does lazy rendering affect the perceived performance of the application? Is there any scroll jumping or other disruptive behavior?

IntersectionObserver works directly with the DOM, and since it is hard to mock for unit tests, E2E tests are required to validate its use. A workaround for tests that render the DOM in memory is to substitute the real IntersectionObserver with a mock implementation that always reports elements as visible, so you can verify what is being generated. However, this approach leaves showing and hiding untested, waiting for a manual test or additional automation created specifically for it.
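One possible shape for such a mock, sketched here with illustrative names, is a stand-in that immediately reports every observed element as intersecting:

```typescript
// Minimal mock that reports every observed element as visible, so the
// rendering path can be unit tested without a real viewport. The entry
// is a partial stub; only the fields the code under test reads
// (target, isIntersecting, intersectionRatio) are filled in.
class AlwaysVisibleObserver {
  constructor(private callback: (entries: Partial<IntersectionObserverEntry>[]) => void) {}

  observe(target: Element): void {
    // immediately report the element as intersecting
    this.callback([{ target, isIntersecting: true, intersectionRatio: 1 }]);
  }

  unobserve(_target: Element): void { /* no-op in the mock */ }
  disconnect(): void { /* no-op in the mock */ }
}
```

Code under test that only reads `target` and `isIntersecting` can then run against plain stub objects, with no browser involved.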

Avoid scroll jumping

Depending on the use case, scroll jumping might not be 100% avoidable, but there are definitely ways to lessen its impact. Scroll jumping can occur when elements are loaded in as the user scrolls, changing the page length and forcing the browser to recalculate the scroll size and position. This can be disorienting for users. To mitigate it, placeholders can be used to reserve space for elements before they are loaded. In our experience, placeholders work best when they can expand, with the min-height CSS property set to a reasonable value, rather than a fixed height or auto. Zero-height placeholders would just trigger all content to load as soon as the page becomes operational.
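A minimal sketch of the placeholder approach (function names and the height estimate are illustrative):

```typescript
// Pure helper: turn an estimated height into the CSS value to apply.
// Clamped to at least 1px — zero-height placeholders would make all
// content load immediately, as noted above.
function minHeightValue(estimatedHeightPx: number): string {
  return `${Math.max(1, Math.round(estimatedHeightPx))}px`;
}

// Create a placeholder that reserves approximate space, keeping the
// scrollbar size and position stable while the real content loads.
function createPlaceholder(estimatedHeightPx: number): HTMLDivElement {
  const placeholder = document.createElement("div");
  placeholder.style.minHeight = minHeightValue(estimatedHeightPx); // can still expand with content
  placeholder.classList.add("placeholder");
  return placeholder;
}

// Once the real content arrives, swap it in place of the placeholder.
function fillPlaceholder(placeholder: HTMLElement, content: HTMLElement): void {
  placeholder.replaceWith(content);
}
```

Because `min-height` is used instead of a fixed `height`, the placeholder can still grow if the eventual content is taller than the estimate.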

Handling re-renders

Re-renders can be costly in terms of performance. Techniques like memoization can help reduce them by sacrificing some memory to unload the CPU. While one of the goals of lazy rendering is to reduce resource utilization, the technique still has some cost associated with it. One way to keep things in check is to remove elements from the DOM but keep them referenced so that the garbage collector won't delete them, and insert them back when appropriate. This added complexity could be too much for some use cases, though.
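That idea can be sketched as a small cache that detaches nodes but keeps them referenced, so they survive garbage collection and can be re-inserted cheaply (the class and its API are illustrative):

```typescript
// Keep removed DOM nodes referenced so they are not garbage collected
// and can be re-attached without rebuilding them.
class NodeCache<K> {
  private cache: Map<K, Element> = new Map();

  // Detach the element from the DOM but hold on to the reference.
  stash(key: K, element: Element): void {
    element.remove();
    this.cache.set(key, element);
  }

  // Re-insert a previously stashed element, if we still have it.
  restore(key: K, parent: Element): boolean {
    const element = this.cache.get(key);
    if (!element) return false;
    parent.appendChild(element);
    this.cache.delete(key);
    return true;
  }

  has(key: K): boolean {
    return this.cache.has(key);
  }
}
```

A cache like this trades memory for CPU: the nodes stay resident, but re-attaching an existing subtree is much cheaper than rebuilding and re-rendering it.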

Performance

IntersectionObserver is just an enabler; depending on how it is implemented and used, it can improve things quite significantly. However, this is also a tradeoff: it adds complexity to the code in exchange for better perceived performance and resource allocation. IntersectionObserver can be used together with list virtualization (a.k.a. windowing) and DOM reuse to drop the number of elements on the page to the bare minimum, around 1.5–2 screens' worth of content. Batching can also significantly improve performance, but might hurt it if not appropriate for the use case.
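The windowing calculation behind list virtualization can be sketched as a pure function that maps the scroll position to the slice of items that should be in the DOM; the overscan parameter controls how much extra content beyond the viewport stays mounted (all names are illustrative, and a fixed item height is assumed for simplicity):

```typescript
// Given the scroll position, compute which slice of a fixed-height list
// should currently be rendered. Everything outside [start, end) can be
// removed from the DOM or recycled.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemHeight: number,
  itemCount: number,
  overscan = 5 // extra items kept above/below the viewport
): { start: number; end: number } {
  const first = Math.floor(scrollTop / itemHeight);
  const visible = Math.ceil(viewportHeight / itemHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(itemCount, first + visible + overscan),
  };
}
```

Variable-height content complicates this considerably, which is where min-height placeholders and measured caches come back into play.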

Downsides of lazy rendering

  1. Complexity — implementing lazy rendering in any shape or form (even via someone else’s NPM package) adds complexity to the codebase and the testing process. It requires careful management of state and timing, and might not be fully compatible with all libraries and frameworks.
  2. SEO impact — search engines may not be able to fully render content that is loaded lazily, especially if it is triggered only by user interaction.
  3. Searching issues — just as search engines cannot index content that is not on the page, users who try to search it with “ctrl+f” will face the same challenge. Engineers have to add additional search capabilities to their websites so that users can find what they need.
  4. Initial load times — lazy rendering can improve overall performance. However, it can also increase initial loading times, as the browser has to download and execute the lazy rendering logic before it can start showing content.
  5. Increased complexity in managing backend calls — since content is loaded piece by piece and triggered by user interaction, it can be challenging to discover potential use cases, and the issues arising from them, before releasing to a wider audience.
  6. Library and framework compatibility — modern JS frameworks hide much of the complexity of managing the DOM by abstracting updates and rendering away from the engineer. IntersectionObserver performs DOM-level optimizations and requires direct access to DOM element references. Pulling it off requires intimate knowledge of both the DOM and the library in question.

Want to learn more about DraftKings’ global Engineering team and culture? Check out our Engineer Spotlights and current openings!


Martin Chaov
DraftKings Engineering

15+ years as a software architect, currently Lead Software Architect at DraftKings, specializing in large-scale systems and award-winning iGaming software.