Monitoring page speed continuously with Lighthouse, DataDog and GitLab

Olga Skurativska
SMG Real Estate Engineering Blog
May 4, 2021 · 4 min read

One of the reasons we’d been putting performance on the back burner was that we didn’t trust our measurements. The variability of the scores was high, and since we used different tools to measure performance, we kept comparing apples to oranges. Most importantly, we attributed all of the negative impact to third-party scripts and gave up on checking our own code.

This post is part of a series: read more about the performance problem Homegate was facing in the first post, Optimizing for Core Web Vitals at Homegate: things we learned along the way.

We needed a new way to monitor performance that would allow us to:

  • Measure performance automatically
  • Measure frequently (and look at the average values)
  • Measure in a uniform way
  • Measure with reduced variability
  • Measure both complete pages and pages with no third-party scripts
  • Make measuring require as little setup as possible for new projects
  • Make historical data accessible (so that we can correlate performance drops with our own releases as well as the changes in third-party code)

Lighthouse scores as DataDog metrics

Since Lighthouse is available as an NPM package, it’s quite easy to run it on a given URL as a scheduled job.
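
Lighthouse’s Node API makes such a job a small script. Here is a minimal sketch; the flags, file names and error handling are simplified and are our choices, not requirements of the API:

```typescript
// Minimal scheduled Lighthouse run: audit one URL and keep both report formats.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';
import { writeFileSync } from 'fs';

async function audit(url: string) {
  // Lighthouse drives a locally launched headless Chrome
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, {
    port: chrome.port,
    output: ['html', 'json'], // HTML for humans, JSON for machines
  });
  const [html, json] = result.report as string[];
  writeFileSync('report.html', html); // saved as a GitLab job artifact
  writeFileSync('report.json', json); // parsed for the DataDog metrics
  await chrome.kill();
  return result.lhr; // the parsed Lighthouse result object
}
```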

Wiring it together

With Homegate’s setup, the easiest way to do this was a scheduled GitLab job run in a Docker container that has Chrome installed. Lighthouse can report its results as HTML (similar to what we see in the Chrome DevTools UI) and as JSON. We save the former as a GitLab job artifact (in case we want to review it) and parse the latter to send the audit scores we want to monitor to DataDog as custom metrics.
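
The reporting side boils down to a POST against DataDog’s v1 series API. A sketch, assuming the API key is injected as a CI variable; the metric naming is our own convention:

```typescript
// Report a single value to DataDog as a custom gauge metric.
import https from 'https';

function sendGauge(metric: string, value: number, tags: string[] = []) {
  const body = JSON.stringify({
    series: [{
      metric, // e.g. 'lighthouse.performance.score' (our naming convention)
      points: [[Math.floor(Date.now() / 1000), value]], // [epoch seconds, value]
      type: 'gauge',
      tags,
    }],
  });
  const request = https.request({
    hostname: 'api.datadoghq.com',
    path: '/api/v1/series',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DD_API_KEY!, // injected as a GitLab CI variable
    },
  });
  request.write(body);
  request.end();
}
```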

What we monitor

Beyond the performance score and individual performance metrics, we also monitor (the sketch after this list shows where each value lives in the Lighthouse JSON):

  • accessibility, SEO, PWA and best practices scores
  • the amount of downloaded JavaScript in kB
  • server response time in ms
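
Assuming a Lighthouse version that ships the resource-summary and server-response-time audits (true for the versions we used), the extraction could look like this:

```typescript
// Pull the monitored values out of a parsed Lighthouse result (lhr).
function extractMetrics(lhr: any) {
  // Category scores come back as 0..1; we report them as 0..100
  const score = (id: string) => (lhr.categories[id].score ?? 0) * 100;

  // 'resource-summary' lists transfer size per resource type
  const scripts = lhr.audits['resource-summary'].details.items
    .find((item: any) => item.resourceType === 'script');

  return {
    performance: score('performance'),
    accessibility: score('accessibility'),
    bestPractices: score('best-practices'),
    seo: score('seo'),
    pwa: score('pwa'),
    javascriptKb: scripts ? scripts.transferSize / 1024 : 0,
    serverResponseMs: lhr.audits['server-response-time'].numericValue,
  };
}
```

Each value is then shipped with the sendGauge helper from the sketch above.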

With or without third-party scripts?

Since we were interested in the impact of third-party scripts on our overall performance score, we wanted to run Lighthouse twice for each page: once with all the scripts and once without the third-party ones.

Luckily, it is possible to block certain domains during a Lighthouse run by specifying URL patterns in a custom config (see https://github.com/GoogleChrome/lighthouse/blob/master/docs/configuration.md for reference).
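
A minimal config for the third-party-free run could look like the sketch below. blockedUrlPatterns is the documented setting; the patterns themselves are just examples, since the real list depends on which scripts a page actually loads:

```typescript
// custom-config.js: block typical third-party hosts during the audit.
module.exports = {
  extends: 'lighthouse:default', // keep the default audits and settings
  settings: {
    // Requests whose URL matches any of these patterns never load
    blockedUrlPatterns: [
      '*googletagmanager.com*',
      '*google-analytics.com*',
      '*doubleclick.net*',
    ],
  },
};
```

The config is then passed as the third argument of the lighthouse() call (or via --config-path on the CLI).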

Setting up a job that checks a bunch of URLs on a regular basis is really easy now. For each new project we want to monitor, we add a JSON config to the repository. Things to specify: a URL to test, a DataDog metric namespace, and a custom Lighthouse config (if needed).
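
The exact schema is defined by our job, so the field names below are only an illustration of the idea:

```typescript
// Hypothetical shape of a per-project config; names are illustrative only.
interface ProjectConfig {
  url: string;                   // the page to audit
  metricNamespace: string;       // prefix for the DataDog metrics
  lighthouseConfigPath?: string; // optional custom Lighthouse config
}

const example: ProjectConfig = {
  url: 'https://www.homegate.ch/',
  metricNamespace: 'homegate.homepage',
  lighthouseConfigPath: './configs/no-third-parties.js',
};
```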

Get the full source code for the lighthouse-datadog GitLab job here.

Setting up the dashboard

To display our newly sent metrics we used DataDog’s Query Value widgets that are able to:

  • display the last value from the given timeframe
  • apply conditional formatting

Conditional formatting is useful for displaying data similarly to how Lighthouse does (green, orange and red values). This way we can see at a glance where we stand.
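
In DataDog’s dashboard JSON, such a widget can be described roughly as follows; the metric name is our own, and the thresholds mirror Lighthouse’s 90/50 color bands:

```typescript
// Sketch of a Query Value widget definition with Lighthouse-style colors.
const widget = {
  definition: {
    type: 'query_value',
    title: 'Performance',
    precision: 0,
    requests: [{
      q: 'avg:homegate.homepage.performance.score{*}',
      aggregator: 'last', // show the most recent value from the timeframe
      conditional_formats: [
        { comparator: '>=', value: 90, palette: 'white_on_green' },
        { comparator: '>=', value: 50, palette: 'white_on_yellow' },
        { comparator: '<', value: 50, palette: 'white_on_red' },
      ],
    }],
  },
};
```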

Lighthouse scores for one of our pages: with and without third-party scripts

Each metric can be plotted over time by expanding the corresponding widget. For the total performance score we’ve also added a widget displaying a trend line.

The unexpected benefits

The dashboard was intended to provide quality insights to our engineers. What we didn’t expect was for it to become a communication tool for many different parties:

  • Product managers and data analysts were able to see the impact of the scripts they injected
  • Engineers were able to use the dashboard to illustrate the necessity of prioritizing performance improvements over new features.
  • A little legend with links to related Google documentation helped all of us learn more about Lighthouse and its metrics.

This is the third post in the “Optimizing for Core Web Vitals at Homegate” series. For the full story, be sure to check out the other posts.


Olga Skurativska is a Principal Frontend Engineer at Homegate who just won’t shut up about JavaScript, serverless, a11y, performance, UX, DX, design systems and tech leadership.