EXPEDIA GROUP TECHNOLOGY — SOFTWARE

Improving Vrbo Homepage Loading Experience

It’s not always easy to improve Performance and User Experience at the same time

Fritz Abrante
Expedia Group Technology

--

Photo by Christina Morillo. Used by permission.

A couple of years ago, the web platform at Vrbo™️ (part of Expedia Group™️) went through a complete re-architecture, and since then the homepage has loaded its modules in the same way: rich SEO content that needs to be Server-Side Rendered, plus personalized content loaded asynchronously from the Client-Side.

This was acceptable as long as we didn’t have a lot of content to show. But over the past year or so, we have been growing the personalized content we present to our customers, such as Improved Search History, Continue Your Booking, Your Upcoming Trip, and lots of Recommendations modules that can help customers book the perfect holiday.

All of these new modules were added to the page and loaded asynchronously, each one taking its own space on the page when data was available and its own time to load. The result was a page with modules jumping around, appearing whenever their calls resolved and in no particular order.

This left us with two problems:

  1. Bad User Experience
  • When landing on the homepage, the user would first see a page with most of the content rendered by the server.
  • For a returning user, the personalized content based on their activity would then start loading.
  • Personalized modules appeared one by one, pushing the existing content of the page up or down depending on each module’s position on the page and how quickly it loaded.
  • All personalized modules were requested for a returning user, causing lots of network traffic even for modules that were never visible to the user because they sat far down the page and the user had no intention of scrolling down.

  2. Performance Issues

  • There was no clear differentiation between real users and bots (crawlers), since we served the same content to both.
  • There was no need to serve all of the page content to real users who only wanted to perform a search in the Search Box.
  • Lots of calls were made from clients to request modules that users might never see.
  • Lots of requests hit our servers for content that users might never see.

How did we solve all these problems?

The team decided to implement two things:

1. Dynamic Rendering

With dynamic rendering, we can identify whether a request comes from a bot (crawler) or a real user, so we can present the content in different ways.

For bots, we present a page rendered from the server with the content that is relevant to them and to SEO. For what we consider real users, we serve a very lightweight page that includes only the content rendered above the fold, in our case just the Header and the Search Box, and then load the rest of the content, both SEO modules and personalized modules, asynchronously from the Client-Side.
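
The details of our implementation are internal, but a minimal sketch of the idea, assuming an Express-style Node server and a simplified user-agent check (the bot pattern and the renderFullPage/renderLightweightShell helpers are illustrative, not our production code), could look like this:

```typescript
import express, { Request, Response } from "express";

const app = express();

// Illustrative only: real bot detection typically relies on maintained
// user-agent lists and verification, not a single regular expression.
const BOT_PATTERN = /googlebot|bingbot|duckduckbot|baiduspider|yandex/i;

function isBot(req: Request): boolean {
  return BOT_PATTERN.test(req.headers["user-agent"] ?? "");
}

// Hypothetical render helpers standing in for the real SSR pipeline.
function renderFullPage(): string {
  return "<html><!-- fully Server-Side Rendered page, all SEO content --></html>";
}

function renderLightweightShell(): string {
  return "<html><!-- Header + Search Box only; other modules load client-side --></html>";
}

app.get("/", (req: Request, res: Response) => {
  if (isBot(req)) {
    // Bots get the complete server-rendered page so every
    // SEO-relevant module is present in the initial HTML.
    res.send(renderFullPage());
  } else {
    // Real users get a lightweight above-the-fold shell; the remaining
    // SEO and personalized modules are fetched asynchronously.
    res.send(renderLightweightShell());
  }
});

app.listen(3000);
```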

2. Module Loading Strategy

With the module loading strategy, we reserve a fixed blank space for each module so that it loads only when a user is about to see it. Requesting the module content just before it comes into view gives a smoother experience.

As the user scrolls down the page, each module requests its content and either fills its blank space with a smooth fade-in animation when there is valid data to display, or collapses the blank space with a smooth height animation when there is no valid data to show. Scrolling back up the page is also supported.
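
A rough sketch of the pattern, assuming a React client and the browser’s IntersectionObserver API (the LazyModule component, its props, and the loadModule callback are illustrative names rather than our production code):

```tsx
import React, { useEffect, useRef, useState } from "react";

interface LazyModuleProps {
  // Reserved height that keeps the layout stable until the module resolves.
  placeholderHeight: number;
  // Hypothetical loader: resolves with the module's content, or null
  // when there is no valid data to show for this user.
  loadModule: () => Promise<React.ReactNode | null>;
}

export function LazyModule({ placeholderHeight, loadModule }: LazyModuleProps) {
  const ref = useRef<HTMLDivElement>(null);
  const [content, setContent] = useState<React.ReactNode | null>(null);
  const [collapsed, setCollapsed] = useState(false);

  useEffect(() => {
    const node = ref.current;
    if (!node) return;

    // Start fetching shortly before the module scrolls into view so the
    // content is usually ready by the time the user reaches it.
    const observer = new IntersectionObserver(
      async ([entry]) => {
        if (!entry.isIntersecting) return;
        observer.disconnect();
        const result = await loadModule();
        if (result) {
          setContent(result); // fade the content in
        } else {
          setCollapsed(true); // animate the reserved space away
        }
      },
      { rootMargin: "200px 0px" } // pre-fetch margin; tune as needed
    );

    observer.observe(node);
    return () => observer.disconnect();
  }, [loadModule]);

  return (
    <div
      ref={ref}
      style={{
        // Note: CSS cannot transition to height "auto"; production code
        // would animate to a measured height. Kept simple here.
        height: collapsed ? 0 : content ? "auto" : placeholderHeight,
        opacity: content ? 1 : 0,
        overflow: "hidden",
        transition: "opacity 300ms ease, height 300ms ease",
      }}
    >
      {content}
    </div>
  );
}
```

A page section would then render something like <LazyModule placeholderHeight={400} loadModule={fetchRecommendations} />, with fetchRecommendations standing in for whatever call backs that module.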

What are the benefits of this approach?

  1. For real users
  • The first request is very lean: fewer operations performed on the server, less JavaScript executed on page load, and less HTML/content served and parsed, resulting in faster page load and execution times.
  • A nicer, smoother loading experience with no unexpected “content jumps”, requesting only the modules the user actually scrolls to instead of all the content when the user never intends to scroll to the bottom of the page.
  • Less data transfer for mobile users, since bytes are precious.
  • Improvement on Primary Action Render (PAR), a custom Vrbo metric that in our case measures when our Search Box is ready to be used: ~20% improvement on P50 and ~15% improvement on P90. (P50 and P90 are percentiles: P50, the median, is the value at or below which 50% of measured events fall, while P90 is the value at or below which 90% fall, i.e. it reflects the slowest 10%.)
Internal EG tool for displaying our custom metric PAR

  2. For bots/crawlers/Google

There was not much of a difference here, because we don’t want to mess with SEO.

  • There was an improvement in Google’s new Web Vitals, since we reduced the jumpiness (Cumulative Layout Shift) and the First Input Delay (FID). These metrics will be taken into consideration by Google for SEO ranking in the not-so-distant future. (A sketch of how these metrics can be collected from real users follows this list.)
  • Improvement on CLS (Cumulative Layout Shift): ~80% improvement on P50 and ~50% improvement on P90
Internal EG tool for displaying RUM metrics for CLS
  • Improvement on FID: ~60% improvement on P50 and ~50% improvement on P95
Internal EG tool for displaying RUM metrics for FID
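
For context, this is roughly how such Web Vitals can be collected from real users with Google’s open-source web-vitals library (a sketch only, not Vrbo’s actual RUM pipeline; the /rum-metrics endpoint is hypothetical, and older versions of the library expose getCLS/getFID instead of onCLS/onFID):

```typescript
import { onCLS, onFID, Metric } from "web-vitals";

// Report each metric to a hypothetical RUM collection endpoint.
function reportMetric(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,   // "CLS" or "FID"
    value: metric.value, // CLS is unitless; FID is in milliseconds
    id: metric.id,       // unique per page load, useful for de-duplication
  });
  // sendBeacon is more reliable than fetch when the page is unloading.
  navigator.sendBeacon("/rum-metrics", body);
}

onCLS(reportMetric);
onFID(reportMetric);
```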

  3. For us, Vrbo, on the Server-Side

  • Fewer unnecessary calls to the server, since users request the data when they need it and not before (~20% fewer calls)
Datadog Request per second
  • Less CPU load, since we no longer Server-Side Render the whole page for every request and perform fewer operations (~15% less CPU)
Datadog CPU usage per instance
  • Lower latency
Datadog Max Latency

Luckily, we were able to make these changes and run them as an A/B test to validate that we were not hurting the way our users perceive the page and the overall experience.

Hope this was a fun read.
