UPDATE: This story is from 2018. Much has likely changed since then, and IntersectionObserver is, for the most part, no longer the recommended method for lazy loading images.
Lazy Loading Images with IntersectionObserver
For most websites nowadays, images are the primary bottlenecks to site performance.
Especially for eCommerce websites, images make up the majority of the page content and tend to be large in size, usually totaling multiple megabytes of image data over the network per page.
Take, for example, the home department landing page at Walmart Labs:
And here’s a summary of the amount of images being loaded:
137 images! More than 80% of the data over the wire is for images. 😬
Now take a look at a snippet of the network request waterfall:
In our specific case, the code-split module is loading much later because it needs the main bundle cp_ny.bundle first. However, that main bundle could have arrived much faster if there weren’t 18 images competing for bandwidth.
Okay, okay, okay. So how can we fix this? Well, you can’t really “fix” it, but there are a lot of things you can do to optimize how images are loaded on your site. Among the many types of optimization, such as different formats, compression, blur animation, and CDNs, I’ll be covering “lazy loading”. Specifically, I’ll show how to implement lazy loaded images using React, but as long as you are using JavaScript, the implementation is (essentially) the same.
Take, for example, this ultra-simplistic React Image component:
import React, { PureComponent } from "react";

class Image extends PureComponent {
  render() {
    const { src } = this.props;
    return <img src={src} />;
  }
}
All it does is take in a src URL as a prop and use it to render an HTML img element.
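Using it is as simple as passing a URL (the URL below is just a placeholder):

// Renders a plain <img>, so the browser starts downloading the image
// as soon as this hits the DOM, which is exactly what we want to avoid
// for offscreen images.
<Image src="https://example.com/product-photo.jpg" />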
Here are the basic steps to making this image component lazy loaded:
1) Initially render no image source.
2) Set up detection for when image intersects with viewport.
3) Render image source when we detect that image will be in view.
Step 1 — Initially render no image
render() {
  return <img />;
}
Step 2 — Set up detection
componentDidMount() {
  this.observer = new IntersectionObserver(
    () => {
      // Step 3
    },
    {
      root: document.querySelector(".container")
    }
  );

  this.observer.observe(this.element);
}

// ...

render() {
  return <img ref={el => (this.element = el)} />;
}
What did I do here?
1) I added a ref to the img element so that we can update the src URL later without causing a re-render.
2) I created a new instance of IntersectionObserver (explained later).
3) I told the observer to “observe” my image element using observe(this.element).
What is IntersectionObserver?
It’s exactly what it sounds like. Here is a quick summary from MDN:
The Intersection Observer API provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport.
It may seem daunting at first, but they’ve actually made the API extremely intuitive. An instance of IntersectionObserver is passed a few options. The one we used was root; this just defines the DOM element we consider the bounding container: the container we want to check whether our image has crossed paths with. It defaults to the visible viewport, but I explicitly set it to a container within the JSFiddle iframe because there is a feature I will explain later that wasn’t designed for use within iframes.
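In other words (a tiny sketch of my own, where callback stands in for whatever intersection handler you pass):

// Default: intersections are measured against the visible viewport.
new IntersectionObserver(callback);

// Explicit root: intersections are measured against a specific scroll container.
new IntersectionObserver(callback, {
  root: document.querySelector(".container")
});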
The reason IntersectionObserver is the more popular method for visibility detection than more traditional methods like onScroll + getBoundingClientRect() is that the actual detection implementation doesn’t run on the main thread. However, the callback for when an intersection has been triggered does run on the main thread, so keep it light!
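For comparison, here’s a rough sketch of that traditional approach (my own illustration, assuming the real URL is stashed in a data-src attribute); notice that all of the measuring work happens in the scroll handler on the main thread:

// Traditional approach (sketch): measure the element on every scroll event.
const img = document.querySelector("img[data-src]");

function onScroll() {
  // getBoundingClientRect() forces layout work on every scroll tick.
  const rect = img.getBoundingClientRect();
  if (rect.top < window.innerHeight) {
    img.src = img.dataset.src;
    window.removeEventListener("scroll", onScroll);
  }
}

window.addEventListener("scroll", onScroll);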
Step 3 — Render image!
Now we need to set up the callback for when an intersection has been triggered between the root and target element. In our case, they are the .container div and the this.element ref, respectively.
// ...
this.observer = new IntersectionObserver(
  entries => {
    entries.forEach(entry => {
      const { isIntersecting } = entry;

      if (isIntersecting) {
        this.element.src = this.props.src;
        this.observer = this.observer.disconnect();
      }
    });
  },
  {
    root: document.querySelector(".container")
  }
);
// ...
The callback for when an intersection is made passes back an array of entries, which are kind of like snapshots of all the target elements that have triggered an intersection. The isIntersecting property signifies the direction of the intersection: it’s true if the target is moving into the root element and false if the target is moving out of the root element.
So, when I detect that the image element is intersecting with the bottom edge of the container, I manually set the image src and clean up the no-longer-needed observer.
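One small detail the snippets above don’t show: if the component can unmount before the image ever intersects, it’s worth disconnecting the observer there as well. A minimal sketch of my own:

componentWillUnmount() {
  // The callback already discards the observer once the image loads,
  // so only disconnect if it's still around.
  if (this.observer) {
    this.observer.disconnect();
  }
}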
(Secret) Step 4 — See result & do the happy dance
Hold on… did you notice something in your result?
Let me speed up the scroll and throttle the network speed for you:
Since we’re only loading the image when the user has already reached the point where they should be seeing the image, the user is unable to scroll down and see the image until it has downloaded. Usually, this isn’t an issue with desktop machines with fast internet, but a lot of consumers surf on their phones nowadays, and sometimes, they’re stuck with 3G or worse… EDGE. 😱
Thankfully, the IntersectionObserver API offers the ability to grow or shrink the detection boundaries of the root element (our .container element).
All we need to do is add one line of code under where we put the option to specify a root container:
rootMargin: "0px 0px 200px 0px"
The rootMargin option takes in a string that conforms to the regular CSS margin rule. In our case, we are telling it to increase the bottom detection boundary by 200px. This means the intersection callback will be triggered when the target crosses a line 200px below the bottom edge of the root element (the default margin is 0).
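So the options object we pass to the IntersectionObserver constructor now looks like this:

{
  root: document.querySelector(".container"),
  // Extend the detection area 200px past the bottom edge of the root,
  // so the image starts downloading before it scrolls into view.
  rootMargin: "0px 0px 200px 0px"
}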
Nice! So even though we’ve only scrolled to the fourth-to-last line of content, the image has already loaded 200px below the screen.
But wait!
For those of you inspecting the GIFs closely, you’ll notice that the scrollbar jumps when the image is loaded. Fortunately, that’s easy to fix. The issue is that the image element, which initially had a height of 0, jumps to 300px once the image loads. All you need to do is set a fixed height by adding the height={300} attribute to the image.
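Putting it all together, a minimal version of the lazy-loaded Image component looks roughly like this (the 300px height and the ".container" selector come from the examples above; the unmount cleanup is the extra bit I sketched earlier):

import React, { PureComponent } from "react";

class Image extends PureComponent {
  componentDidMount() {
    this.observer = new IntersectionObserver(
      entries => {
        entries.forEach(entry => {
          if (entry.isIntersecting) {
            // The image only starts downloading once it comes within
            // 200px of entering the container.
            this.element.src = this.props.src;
            this.observer = this.observer.disconnect();
          }
        });
      },
      {
        root: document.querySelector(".container"),
        rootMargin: "0px 0px 200px 0px"
      }
    );

    this.observer.observe(this.element);
  }

  componentWillUnmount() {
    // Clean up if the component unmounts before the image ever intersects.
    if (this.observer) {
      this.observer.disconnect();
    }
  }

  render() {
    // The fixed height reserves space so the scrollbar doesn't jump
    // when the image finally loads.
    return <img ref={el => (this.element = el)} height={300} />;
  }
}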
So what kind of performance benefits did we see with our home department landing page at Walmart Labs? The performance benefit varies wildly depending on network speed, CDN availability, the number of images on the page, their intersection rules, etc. In other words, you’re better off implementing this in your own app and measuring the actual benefits.
If you’re still curious about the benefits we’ve seen at Walmart Labs, our internal synthetic slow-3G tests on pre-prod environments showed up to a 32% decrease in load time, up to a 22% decrease in speed index, and up to a 17% decrease in above-the-fold times.
Thanks for reading! 😊