Core Web Vitals Iterative Measurement using Lighthouse

Mohammad Hasan Muktasyim Billah
Blibli.com Tech Blog
6 min read · Jul 1, 2022

Core Web Vitals (CWV) are metrics introduced by Google as quality guidance for a great experience on the web. As of 2020, the Core Web Vitals metrics are Cumulative Layout Shift (CLS), Largest Contentful Paint (LCP), and First Input Delay (FID).

When we want to make improvements related to Core Web Vitals, we need to know the CWV scores of our website before and after the improvement is made. This is necessary to verify whether the improvements we made really have an impact on the CWV scores or not.

Read my other article about Improving Website Cumulative Layout Shift (CLS) Metrics.

Core Web Vitals can be measured with several tools, such as Lighthouse or PageSpeed Insights. I personally prefer to use Lighthouse as a lab measurement tool. The reason is that Lighthouse can be run on our local device, so it can measure non-public websites that are still in the development stage. Apart from being built into Google Chrome, Google also gives us the option to run Lighthouse from the CLI or via its Node module.

To run Lighthouse in Google Chrome, we just need to open Chrome DevTools, select the Lighthouse panel, and hit “Generate report”.

After the measurement is complete, Lighthouse will display the resulting value for each metric.

Why do we need iterative measurement?

I personally do not recommend using the result of just one test as a benchmark for determining whether an improvement is really effective, because running the measurement again may produce different results due to factors such as unstable internet speed.

For example, when we ran Lighthouse a second time on the same page, some metric values differed even though there was no change in the source code.

Because of that, it is better to carry out the measurement over several iterations and calculate the average, percentile, minimum, and maximum of all the results obtained. We can then compare these values to determine whether an improvement is effective or not.

In this article, we will create a simple script to perform iterative measurements using the lighthouse Node module.

Launch Chrome and Run Lighthouse

Before starting, make sure you have Node.js and Chrome installed on your device. You can download Node.js from its official download page.

First, create a new folder and run the npm init command in it. Continue the initialization process until it is complete and a package.json file has been created. Then install the lighthouse dependency with npm install --save-dev lighthouse. Finally, create a new JavaScript file; in this case I will name it lighthouse-iterative.js.
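Here is a minimal sketch of what lighthouse-iterative.js can look like, based on the documented programmatic APIs of the lighthouse and chrome-launcher modules; the specific options shown are assumptions you can adjust:

```js
const lighthouse = require('lighthouse');
// chrome-launcher ships as a dependency of lighthouse; install it
// explicitly with `npm install --save-dev chrome-launcher` if needed
const chromeLauncher = require('chrome-launcher');

async function launchChromeAndRunLighthouse(url) {
  // Launch a Chrome instance that Lighthouse can attach to
  const chrome = await chromeLauncher.launch({ chromeFlags: [] });
  const options = {
    logLevel: 'info',
    output: 'json',
    onlyCategories: ['performance'], // remove to measure all categories
    port: chrome.port,
  };
  const runnerResult = await lighthouse(url, options);
  await chrome.kill();
  return runnerResult;
}

launchChromeAndRunLighthouse('https://seller-api.blibli.com/').then((result) => {
  // result.lhr is the Lighthouse result object; log some of the audited metrics
  console.log(result.lhr.audits['largest-contentful-paint'].displayValue);
  console.log(result.lhr.audits['cumulative-layout-shift'].displayValue);
});
```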

In the code above, the launchChromeAndRunLighthouse function launches Chrome for a given URL and then runs Lighthouse to measure the performance metrics. If you need to measure all categories, you can remove the onlyCategories option.

If we run the code with the node lighthouse-iterative.js command, a new Chrome window will open and the measurement will run against https://seller-api.blibli.com/. The result of the measurement will be displayed in the log.

We have successfully run a measurement using Lighthouse. However, by default Lighthouse carries out measurements using a mobile screen simulator. If we want to measure using a simulated desktop screen, or with custom throttling, we need to add some additional configuration.
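A sketch of such a configuration might look like this; the screen size follows the 1920 × 1080 example below, while the throttling values mirror Lighthouse's desktop preset and are an assumption you can tune:

```js
const config = {
  extends: 'lighthouse:default',
  settings: {
    formFactor: 'desktop',
    screenEmulation: {
      mobile: false,
      width: 1920,
      height: 1080,
      deviceScaleFactor: 1,
      disabled: false,
    },
    // Desktop-style throttling; adjust these values for your own baseline
    throttling: {
      rttMs: 40,
      throughputKbps: 10240,
      cpuSlowdownMultiplier: 1,
      requestLatencyMs: 0,
      downloadThroughputKbps: 0,
      uploadThroughputKbps: 0,
    },
  },
};

// Inside launchChromeAndRunLighthouse, pass the config as the 3rd parameter:
// const runnerResult = await lighthouse(url, options, config);
```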

The code above is an example configuration for measuring with a desktop simulation and a 1920 × 1080 screen. The config variable then needs to be passed as the third parameter when calling lighthouse.

As you can see, Lighthouse runs the measurement using the screen size from the config. You don’t have to worry about the size of the opened browser window, because the measurement is based only on Lighthouse’s simulated screen size.

To make it easier to measure performance on different websites or paths, we can use command-line arguments to receive the URL to be tested. For that, we can install the yargs package with the command npm install --save-dev yargs.

Import yargs and modify the main execution logic to use the URL from argv when running Lighthouse.
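Here is a sketch of that change, assuming yargs v17 and its hideBin helper:

```js
const yargs = require('yargs/yargs');
const { hideBin } = require('yargs/helpers');

// Parse --url from the command line
const argv = yargs(hideBin(process.argv))
  .option('url', { type: 'string', demandOption: true })
  .argv;

launchChromeAndRunLighthouse(argv.url).then((result) => {
  console.log(result.lhr.audits['largest-contentful-paint'].displayValue);
});
```

We can then run the script using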

node lighthouse-iterative.js --url <url>

We can save the result to a file so that we can process and access the data more flexibly. To do so, we need to modify the main execution logic to save the JSON result to a file.
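One possible saveResult helper is sketched below; the folder-name derivation is an assumption, one reasonable way to map a page path to a directory:

```js
const fs = require('fs');
const path = require('path');

function saveResult(url, result) {
  // Derive a folder name from the page path, e.g. "/a/b" becomes "a-b"
  const pagePath =
    new URL(url).pathname.replace(/\//g, '-').replace(/^-+|-+$/g, '') || 'root';
  const dir = path.join('results', pagePath);
  fs.mkdirSync(dir, { recursive: true });

  // Name the file with the execution timestamp; result.report is the
  // JSON string because we requested output: 'json'
  fs.writeFileSync(path.join(dir, `${Date.now()}.json`), result.report);
}
```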

The results will be saved with the following file structure:

results
  web-page-path
    result-file-with-execution-timestamp-name

Viewing a Single Report

The results obtained are stored as JSON files. To read the data easily, you can use the Lighthouse Report Viewer (https://googlechrome.github.io/lighthouse/viewer/); you just need to upload the JSON file there.

The Lighthouse Report Viewer will display the results just like those obtained through Lighthouse in Chrome DevTools.

Iterative Measurement

We can add an argument named iterative that accepts a number determining how many test runs will be performed. We also need to store the results and calculate statistics such as the minimum, maximum, average, and percentiles for the metrics we care about.

First, we need to modify the launchChromeAndRunLighthouse and saveResult functions so that Lighthouse can be executed multiple times, with each result saved to a different file.
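One possible shape for these functions; the sequential loop and the index-suffixed file names are assumptions:

```js
// Run Lighthouse `iterations` times sequentially and collect every result
async function runIterativeMeasurement(url, iterations) {
  const results = [];
  for (let i = 0; i < iterations; i++) {
    console.log(`Running measurement ${i + 1}/${iterations}...`);
    results.push(await launchChromeAndRunLighthouse(url));
  }
  return results;
}

// Save each result to its own file and return the output directory
function saveResults(url, results) {
  const pagePath =
    new URL(url).pathname.replace(/\//g, '-').replace(/^-+|-+$/g, '') || 'root';
  const dir = path.join('results', pagePath);
  fs.mkdirSync(dir, { recursive: true });
  results.forEach((result, i) => {
    // Include the iteration index so file names never collide
    fs.writeFileSync(path.join(dir, `${Date.now()}-${i}.json`), result.report);
  });
  return dir;
}
```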

We then just need to call runIterativeMeasurement and saveResults in the main execution logic, so that a measurement with 10 iterations produces 10 different result files. We can then calculate the average, minimum, maximum, and percentiles over all the data. For this, we will create a new function called writeMetricFile.
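A sketch of writeMetricFile, using a simple nearest-rank percentile; the exact statistics helpers are assumptions:

```js
// Metric audit IDs to process; modify this list to add other metrics
const metricResults = [
  'largest-contentful-paint',
  'total-blocking-time',
  'max-potential-fid',
  'cumulative-layout-shift',
];

// Nearest-rank percentile over an ascending pre-sorted array
function percentile(sorted, p) {
  return sorted[Math.max(0, Math.ceil((p / 100) * sorted.length) - 1)];
}

function writeMetricFile(dir, lhrs) {
  for (const metric of metricResults) {
    const values = lhrs
      .map((lhr) => lhr.audits[metric].numericValue)
      .sort((a, b) => a - b);
    const summary = {
      values,
      average: values.reduce((sum, v) => sum + v, 0) / values.length,
      min: values[0],
      max: values[values.length - 1],
      p75: percentile(values, 75),
      p95: percentile(values, 95),
      p99: percentile(values, 99),
    };
    // One file per metric: raw values from every iteration plus statistics
    fs.writeFileSync(
      path.join(dir, `${metric}.json`),
      JSON.stringify(summary, null, 2)
    );
  }
}
```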

The writeMetricFile function is a simple function that processes the resulting data. In this example, I only process the largest-contentful-paint, total-blocking-time, max-potential-fid, and cumulative-layout-shift metrics; if you want to add another metric, you can modify the metricResults variable. The function above creates four new files containing the metric data from all iterations, along with the average, minimum, maximum, 75th percentile, 95th percentile, and 99th percentile values.

After we combine the functions above, the main execution logic looks as follows.
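A sketch of the combined main logic, with both --url and --iterative arguments:

```js
const argv = yargs(hideBin(process.argv))
  .option('url', { type: 'string', demandOption: true })
  .option('iterative', { type: 'number', default: 1 })
  .argv;

(async () => {
  const results = await runIterativeMeasurement(argv.url, argv.iterative);
  const dir = saveResults(argv.url, results);
  // Summarize the chosen metrics across all iterations
  writeMetricFile(dir, results.map((result) => result.lhr));
})();
```

Running node lighthouse-iterative.js --url <url> --iterative 10 would then produce 10 result files plus the per-metric summary files.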

If you want to run measurements without opening a Chrome window, you can add --headless to chromeFlags in the launchChromeAndRunLighthouse function.
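For example:

```js
// Launch Chrome without a visible window
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
```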

That’s it! Now you can use the script to measure your web performance iteratively and use the results as a benchmark for determining whether an improvement affects the Core Web Vitals scores. You can see the code in full on the following page.
