Understanding web performance metrics and tools

Olga Skurativska
SMG Real Estate Engineering Blog
4 min read · May 6, 2021
All the tools! (Photo by Cesar Carlevarino Aragon on Unsplash)

When you have a lot to improve, what should you focus on first?

We approached our page speed problem by trying to follow the insights from all the different tools we were using, looking to estimate the potential impact of each improvement and prioritize our work accordingly.

This post is part of a series: read more about the performance problem Homegate was facing in the first post, Optimizing for Core Web Vitals at Homegate: things we learned along the way.

However, there were a few issues with the way we were measuring performance:

  • The high variability of the Lighthouse score made it useless for assessing the impact of our improvements
  • The Core Web Vitals report in our Google Search Console was showing a much worse Cumulative Layout Shift (CLS) than what we measured with Lighthouse — we didn’t know which measurements to trust
  • Different people used different tools for measuring performance and received very different insights

Some of us insisted we should fix our CLS first; others, that we should reduce the size of our JavaScript bundle. We agreed on one thing: we needed a better understanding of how the performance score is calculated and how to interpret the results from different measurement tools.

The internet offers tons of information on the subject. Combing through Google’s documentation on https://web.dev gave us the best overview.

Lighthouse, PageSpeed Insights and other tools

Google offers two ways of measuring page quality: in the lab and in the field. To get a good picture of how users experience our websites we need to keep an eye on both lab and field measurements.

When it comes to tooling, there are two major players:

  1. Lighthouse is a tool for testing performance in the lab.
  2. The Chrome User Experience (CrUX) Report dataset contains field data, the results of Real User Monitoring (RUM). This data is available for querying in a public Google BigQuery project and powers PageSpeed Insights and the Core Web Vitals report in Google Search Console.

Lighthouse

  • Lighthouse runs in a simulated environment (sandboxed Chrome) with fixed network and CPU throttling settings
  • It is run on demand (using Chrome DevTools or the CLI)
  • It audits what happens on the page until the page is fully loaded

Chrome User Experience (CrUX) Report

  • Data is collected from the browsers and devices of the real users
  • Data is aggregated from Chrome users who have opted-in to syncing their browsing history, have not set up a Sync passphrase, and have usage statistic reporting enabled
  • Data is collected during the entire time of the interaction with the page

Performance metrics in Lighthouse and CrUX Report

Which metrics are the most important?

Each of the individual metrics reflects a different aspect of good user experience. That is why it is important to keep all of them healthy.

However, if one absolutely needs to prioritize, the Lighthouse Scoring Calculator might come in handy: it visualizes how much weight each metric carries in the total score.

The Lighthouse Scoring Calculator gives an idea of how much weight each metric has in the total score
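As a back-of-the-envelope sketch of how those weights combine: the total score is a weighted average of the per-metric scores. The weights below are the Lighthouse v6/v7 values current at the time of writing (they change between versions), and each per-metric score from 0 to 1 comes from a log-normal curve that is not shown here.

```javascript
// Metric weights as shown in the Lighthouse Scoring Calculator
// (v6/v7-era values; an assumption here, they change between versions).
const WEIGHTS = { FCP: 0.15, SI: 0.15, LCP: 0.25, TTI: 0.15, TBT: 0.25, CLS: 0.05 };

// Combine per-metric scores (each 0 to 1) into the familiar 0 to 100 score.
function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}

// TBT and LCP carry the most weight, so regressions there hurt the most:
performanceScore({ FCP: 1, SI: 1, LCP: 0.5, TTI: 1, TBT: 0.5, CLS: 1 }); // 75
```

Halving only the LCP and TBT scores already costs 25 points, which matches the intuition the calculator gives: those two metrics are usually worth prioritizing.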

Measuring Core Web Vitals during development

Lighthouse gives us a good picture of what happens during the page load. But what should we do if RUM data highlights issues that happen after the page has loaded? How can those issues be reproduced in a lab setting, so that we know what to fix?

For that purpose Google provides another tool — the Web Vitals Chrome Extension, which measures LCP, CLS and FID while you interact with the page in Chrome.

Core Web Vitals of non-Chrome users?

It is possible to use the JavaScript web-vitals library to collect real data from the users of your website and send it to an analytics system, e.g. Google Analytics. This approach gives us more insight, as Core Web Vitals will be measured for non-Chrome users as well (with some limitations).
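A minimal sketch of that setup, assuming the web-vitals npm package (using its 2021-era getCLS/getFID/getLCP callback API) and a hypothetical /analytics endpoint on your own backend:

```javascript
// Serialize a web-vitals metric into the payload we send to analytics.
function toPayload(metric) {
  return JSON.stringify({
    name: metric.name,   // "CLS", "FID" or "LCP"
    value: metric.value, // milliseconds for FID/LCP, unitless for CLS
    id: metric.id,       // unique per page load, useful for aggregation
  });
}

function sendToAnalytics(metric) {
  const body = toPayload(metric);
  // sendBeacon survives the page being unloaded; fall back to fetch.
  (navigator.sendBeacon && navigator.sendBeacon('/analytics', body)) ||
    fetch('/analytics', { body, method: 'POST', keepalive: true });
}

// Wire up the callbacks only in a browser; web-vitals relies on
// browser-only APIs such as PerformanceObserver.
if (typeof window !== 'undefined') {
  import('web-vitals').then(({ getCLS, getFID, getLCP }) => {
    getCLS(sendToAnalytics);
    getFID(sendToAnalytics);
    getLCP(sendToAnalytics);
  });
}
```

Each callback fires once the metric is final (CLS, for example, is only known when the page is being hidden), so the beacon-based delivery matters more than it might first appear.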

This is the second post of the “Optimizing for Core Web Vitals at Homegate” series. For the full story, be sure to check out the other posts in the series.



A Principal Frontend Engineer at Homegate who just won’t shut up about JavaScript, serverless, a11y, performance, UX, DX, design systems and tech leadership.