Optimising Core Web Vitals at Carousell

Shubham Desale · Published in Carousell Insider · Feb 8, 2022

In an era of short attention spans, it is common for users to abandon websites that take more than 3–4 seconds to load. Numerous case studies by leading e-commerce companies show how faster loading speeds directly correlate with good UX and better user retention. Considering the multitude of wins for both users and the organisation, we decided to embark on a performance journey, and we improved each of the Core Web Vitals by at least 40%.

Identifying opportunities

We started with identifying the gaps. Google Search Console looked like a good starting point because it provided RUM (Real User Monitoring) data from users across various geographies. It also didn’t require any additional setup, unlike other RUM tools in the market. It showed us insights on the Core Web Vitals (CWV): LCP, FID and CLS. We had scope to improve all of these metrics.

We tried out Google’s Lighthouse tool to measure the performance score in development but soon realised that results varied a lot across local runs. We created an automation script using the Lighthouse CLI that ran the tests in headless Chrome multiple times on a given URL and recorded the results in a Google Sheet, so that we could use the average values of the CWVs to test any improvements locally.
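As a rough illustration, the script looked something like the sketch below; the URL and run count are placeholders, and the Google Sheets upload step is omitted. Note that Lighthouse reports TBT (Total Blocking Time) as its lab proxy for FID.

```typescript
// run-lighthouse.ts — sketch: run the Lighthouse CLI several times against one
// URL in headless Chrome and average the lab metrics. Assumes the `lighthouse`
// CLI is installed; the Google Sheets upload from the article is omitted here.
import { execSync } from "node:child_process";

const TARGET_URL = "https://www.carousell.sg/"; // placeholder URL
const RUNS = 5; // placeholder run count

function runOnce(url: string) {
  const json = execSync(
    `lighthouse ${url} --only-categories=performance ` +
      `--output=json --output-path=stdout --quiet ` +
      `--chrome-flags="--headless"`,
    { encoding: "utf8", maxBuffer: 50 * 1024 * 1024 }
  );
  const report = JSON.parse(json);
  return {
    lcp: report.audits["largest-contentful-paint"].numericValue, // ms
    cls: report.audits["cumulative-layout-shift"].numericValue,  // unitless
    tbt: report.audits["total-blocking-time"].numericValue,      // ms, FID proxy
  };
}

const results = Array.from({ length: RUNS }, () => runOnce(TARGET_URL));
const avg = (key: "lcp" | "cls" | "tbt") =>
  results.reduce((sum, r) => sum + r[key], 0) / results.length;

console.log(`Averages over ${RUNS} runs:`);
console.log(`  LCP: ${avg("lcp").toFixed(0)} ms`);
console.log(`  CLS: ${avg("cls").toFixed(3)}`);
console.log(`  TBT: ${avg("tbt").toFixed(0)} ms`);
```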

Lighthouse provided excellent suggestions that helped us identify low-hanging fruits such as adding resource hints, optimising image loading and reducing third-party scripts.
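As an example of the first suggestion, preconnect and dns-prefetch hints for heavily used third-party origins can be rendered into the document head; the hostnames below are placeholders rather than our actual origins.

```tsx
// ResourceHints.tsx — sketch of resource hints rendered into the document <head>.
// The hostnames are placeholders, not Carousell's actual origins.
import React from "react";

export function ResourceHints() {
  return (
    <>
      {/* Warms up DNS + TCP + TLS for an origin the page will definitely use */}
      <link rel="preconnect" href="https://cdn.example.com" />
      {/* Cheaper, DNS-only hint for less critical third-party origins */}
      <link rel="dns-prefetch" href="https://analytics.example.com" />
    </>
  );
}
```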

We reckoned every page type has its own quirks and needs to be optimised separately with different techniques. We focused on the Homepage and Product pages to begin with because these pages received higher traffic than the rest of the pages on the website.

LCP (Largest Contentful Paint) optimisation

Initial LCP for homepage — 6.8 seconds

Our homepage was entirely rendered on the client side. After some basic experimentation, we reckoned SSR (Server Side Rendering) would improve the LCP because the largest image element would be available right away for the browser to download, instead of waiting for the browser to render the DOM and then load the largest element.

But wait… SSR should be used mindfully to reap the maximum benefits!

When we configure a page for complete SSR, there are two additional steps that happen on the server:

  1. Fetching required data for ALL the sections on the page, including below-the-fold components (mostly via API calls)
  2. Converting the React components to their HTML markup

We deconstructed Step 1 to make only the API calls required for the above-the-fold components visible in the viewport, which reduced the time and cost of SSR. All the below-the-fold components can be loaded lazily as the user scrolls down to them. We can trigger the lazy-loading in advance using a scrolling offset so the components are ready before users scroll them into the viewport.
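The client-side half of this can be sketched with an IntersectionObserver wrapper, where rootMargin acts as the scrolling offset; the component and prop names below are illustrative, not our actual code.

```tsx
// LazySection.tsx — sketch of deferring below-the-fold sections.
// Component and prop names are illustrative, not Carousell's actual code.
import React, { useEffect, useRef, useState } from "react";

type Props = {
  children: React.ReactNode;
  offset?: string; // how far ahead of the viewport to start loading
};

export function LazySection({ children, offset = "400px" }: Props) {
  const ref = useRef<HTMLDivElement>(null);
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    const node = ref.current;
    if (!node) return;
    // rootMargin is the "scrolling offset": the section starts rendering
    // (and fetching its own data) before it actually enters the viewport.
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting) {
          setVisible(true);
          observer.disconnect();
        }
      },
      { rootMargin: offset }
    );
    observer.observe(node);
    return () => observer.disconnect();
  }, [offset]);

  // On the server this renders only an empty container, so SSR never has to
  // fetch data for below-the-fold sections.
  return <div ref={ref}>{visible ? children : null}</div>;
}
```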

The above ‘selective SSR’ approach works well for users, but what about crawlers? Crawlers can’t scroll down the page to trigger the lazy-loading of the components, so we would hurt SEO because not all the components would be present in the markup for crawling! The solution is to use user-agent sniffing on the server and render a full HTML page for crawlers.
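A simplified sketch of that check is shown below; the bot pattern is illustrative and not exhaustive, and the render helpers in the comment are hypothetical.

```typescript
// crawler.ts — sketch of the user-agent check that decides between selective
// SSR (for users) and full-page SSR (for crawlers). The pattern below is
// illustrative and not exhaustive.
const BOT_UA_PATTERN =
  /googlebot|bingbot|yandex|baiduspider|duckduckbot|facebookexternalhit|twitterbot/i;

export function isCrawler(userAgent = ""): boolean {
  return BOT_UA_PATTERN.test(userAgent);
}

// In the SSR request handler (framework-specific, so only sketched here;
// renderFullPage/renderAboveTheFold are hypothetical helpers):
//
//   const html = isCrawler(req.headers["user-agent"] ?? "")
//     ? renderFullPage(req)      // crawlers get every section in the markup
//     : renderAboveTheFold(req); // users get selective SSR + lazy-loading
```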

Homepage LCP Before:

Homepage LCP After:

Result: homepage LCP was reduced by 65%. Further optimisation could be achieved by reducing the size of the LCP image so that it loads quicker.

Similarly for product detail pages, we implemented SSR to reduce the loading time for the product image. While doing this, we had an interesting insight on how best practices should be used mindfully to avoid unexpected performance degradation!

Our image components were set to lazy-load images using JS when they became visible within the viewport. Even with SSR, the image component would render a placeholder first and then JS would load the actual product image. This approach depended on JS being downloaded and executed before the image could even start loading. We removed lazy-loading for the product image so that it is readily available to the browser in the SSR markup.
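Conceptually, the change is opting the LCP candidate out of lazy-loading so its real src is present in the SSR markup; the sketch below uses the browser's native loading attribute rather than our JS placeholder, and the names are illustrative.

```tsx
// ProductImage.tsx — sketch: the LCP candidate skips lazy-loading entirely.
// Prop and component names are illustrative.
import React from "react";

type Props = { src: string; alt: string; isLcpCandidate?: boolean };

export function ProductImage({ src, alt, isLcpCandidate = false }: Props) {
  if (isLcpCandidate) {
    // Rendered with its real src in the SSR markup, so the browser's preload
    // scanner can start downloading it before any JS runs.
    return <img src={src} alt={alt} loading="eager" decoding="async" />;
  }
  // Everything else can stay lazy (shown here with native lazy-loading).
  return <img src={src} alt={alt} loading="lazy" decoding="async" />;
}
```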

Apart from SSR, we also added a preload hint for the product image, so it is discovered earlier by the browser. This resulted in an easy ~200ms win in LCP.
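A sketch of the hint is below; how the tag reaches the document head is framework-specific, and the component and prop names are illustrative.

```tsx
// ProductImagePreload.tsx — sketch: a preload hint for the LCP image, emitted
// during SSR so the browser can start fetching it as early as possible.
// Names are illustrative; wiring the tag into the <head> is framework-specific.
import React from "react";

export function ProductImagePreload({ imageUrl }: { imageUrl: string }) {
  return <link rel="preload" as="image" href={imageUrl} />;
}
```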

FID (First Input Delay) optimisation

We have an excellent dynamic UI system that renders components based on backend configuration. Using Chrome’s coverage tool, we noticed a few components in the JS bundles that shouldn’t have been there because they were not present in the backend configuration. We lazy-loaded all of those components based on scrolling position, which helped us reduce the initial JS size by over 8%.
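A sketch of the idea, reusing the viewport-aware wrapper from the LCP section; the component and file names are hypothetical.

```tsx
// HomeFeed.tsx — sketch: code-splitting a below-the-fold dynamic UI component
// so its JS only downloads when it is about to scroll into view.
// Component and file names are hypothetical; LazySection is the wrapper
// sketched earlier in this post.
import React, { Suspense, lazy } from "react";
import { LazySection } from "./LazySection";

// import() creates a separate chunk, keeping this component out of the
// initial JS bundle; the chunk is only fetched when the component renders.
const RecommendedSellers = lazy(() => import("./RecommendedSellers"));

export function HomeFeed() {
  return (
    <LazySection offset="600px">
      {/* The fallback reserves space, which also avoids a layout shift */}
      <Suspense fallback={<div style={{ minHeight: 320 }} />}>
        <RecommendedSellers />
      </Suspense>
    </LazySection>
  );
}
```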

We have a number of third-party scripts in GTM that are used by marketing teams. We went through all of the tags to ensure that no unused scripts were being inserted. We also made sure all of the scripts had the correct ‘triggers’ set on them, so they are loaded only when certain conditions are met. We shaved off around 25kB of GTM tag configuration after this exercise.

To ensure that we don’t end up adding GTM tags that incur a performance penalty, we have set up a review system where all GTM changes are previewed by developers and profiled for performance implications before they are used in production. This required us to onboard marketing stakeholders and communicate the importance of web performance from a user retention point of view.

After this whole exercise, we shaved off ~300ms from FID!

CLS (Cumulative Layout Shift) optimisation

We have ads displayed across several page types. Some of them are lazy-loaded based on the scroll position of the user. When a user scrolls to them, the loaded ad pushes the search results down, which results in layout shifts. To fix this issue, we added a placeholder where the ad is being loaded. After it loads, the ad simply replaces the placeholder, thus avoiding any layout shift.
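A minimal sketch of such a placeholder is below; the reserved height is a placeholder value, and in practice it would come from the known dimensions of the ad unit.

```tsx
// AdSlot.tsx — sketch: reserve the ad's space up front so the loaded ad
// replaces a same-sized box instead of pushing search results down.
// The default height is a placeholder; real slots would use the known
// dimensions of each ad unit.
import React from "react";

type Props = { children?: React.ReactNode; height?: number };

export function AdSlot({ children, height = 250 }: Props) {
  return (
    <div style={{ minHeight: height }}>
      {children /* the ad fills the reserved box once it loads */}
    </div>
  );
}
```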

We also had layout shift issues on Product pages. There are no restrictions on what the aspect ratio of product images should be, so we couldn’t predict how much space to reserve before the image was actually loaded. To solve this issue, we simply restricted the aspect ratio of the initial image to a square. This had a huge impact on our CLS scores in Google Search Console, where page views with “Good” CLS improved from 41% to 74%.
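Roughly, the container reserves a square box before the photo arrives; the sketch below uses the CSS aspect-ratio property (older browsers would need a padding-based fallback), and the names are illustrative.

```tsx
// SquareProductImage.tsx — sketch: constrain the product image container to a
// square so its space can be reserved before the photo loads, regardless of
// the photo's real aspect ratio. Names are illustrative.
import React from "react";

export function SquareProductImage({ src, alt }: { src: string; alt: string }) {
  return (
    <div style={{ aspectRatio: "1 / 1", overflow: "hidden" }}>
      <img
        src={src}
        alt={alt}
        style={{ width: "100%", height: "100%", objectFit: "cover" }}
      />
    </div>
  );
}
```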

Before

After

After all of these improvements, we were able to reach a “Good” level in performance in some markets.

Want to work on exciting problems related to web performance? Check out our careers site to find out more about available roles.
