How to set up automatic Lighthouse testing for CI/CD

Jacob Overgaard
Aug 18, 2020

Lighthouse by Google has become increasingly popular in recent years, to the point where companies see a measurable increase in profit when they improve their performance scores.

Photo by Paulius Dragunas on Unsplash

Some colleagues and I thought to ourselves: why not give our developers a way to see how their projects perform in Lighthouse? Preferably, the test would run in our CI environment, Azure Pipelines. We set some goals for the project, and I went to work:

  1. A continuous measurement of several Lighthouse categories every time the code changes
  2. Each project should be able to define their own goals and have them measured
  3. The Lighthouse score should be tracked on a timeline to see the historic trends

The perfect tool

The Google Chrome team has already published a Node CLI that can run the tests, generate a report, and upload that report to a reporting server.

We did some research and found an extension for Azure Pipelines that wraps Google's CLI, along with an article explaining how to use it, and I discovered a Docker wrapper for the Lighthouse CI reporting server. That was pretty much everything I needed.

The architecture for the Azure Pipelines build could then look like this:

Architecture of the Azure Pipelines setup


With the extension in hand, it was pretty simple to integrate the tasks into the existing build pipeline. I ran into a dilemma here, though.

Do I run the Lighthouse test in the build or in the release pipeline, or both?

To make a long story short, I ended up adding an extra deploy stage to the build pipeline using Azure's multi-stage YAML pipelines, which deployed the project to a dedicated CI environment before running the Lighthouse test; everything happens inside the build pipeline. First and foremost, this made the result of the build visible in connected applications such as Bitbucket, but it also made it possible to mark the build as failed if the Lighthouse test did not pass, just like any other unit or integration test.

It was simple to add an extra stage for the Lighthouse test after that, which is dependent on the deploy stage:

Test CI stage (build and deploy stage not shown here)
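A sketch of such a stage in the multi-stage YAML could look like the following. The stage and job names, the agent image, and the use of the plain @lhci/cli package (the article itself uses an Azure Pipelines extension for this step) are illustrative:

```yaml
stages:
  # ... build and deploy stages omitted ...
  - stage: lhci
    displayName: Lighthouse CI
    dependsOn: deploy          # run only after the CI environment is deployed
    jobs:
      - job: run_lighthouse
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: |
              npm install -g @lhci/cli
              lhci autorun
            displayName: Run Lighthouse CI
            # Let the pipeline continue (with a warning) even if assertions fail
            continueOnError: true
```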

I added a Lighthouse CI configuration file with some presets to the Git repository:

The configuration file with the presets for each task — collect, assert, and upload.

I wanted to run the test 3 times in a row using a headless version of Chrome (Puppeteer), because the first run would probably be slower than the following ones while the website was still rebuilding its caches and so on, so it was necessary to have at least a few test runs.
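In Lighthouse CI's configuration file, that corresponds to a collect section along these lines (the URL and Chrome flags are placeholders):

```json
{
  "ci": {
    "collect": {
      "numberOfRuns": 3,
      "url": ["https://ci.example.com/"],
      "settings": {
        "chromeFlags": "--headless"
      }
    }
  }
}
```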

I wanted to use the preset called “lighthouse:no-pwa”, which includes all of the default assertions except the PWA ones (I am testing just a server-side-rendered Angular app), and I wanted to add a few assertions of my own to illustrate how to set them up.

Note how I was actually a bit too aggressive with the “performance” score, which I should probably adjust at some point; after all, I was running the test from a build server against a B1 App Service plan. The optimistic aggregation method meant that only the best result of the 3 runs was used for each assertion.
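An assert section in that spirit might look like this; the 0.9 minimum score is purely illustrative (and, as noted, probably too aggressive for a small App Service plan):

```json
{
  "ci": {
    "assert": {
      "preset": "lighthouse:no-pwa",
      "assertions": {
        "categories:performance": [
          "error",
          { "minScore": 0.9, "aggregationMethod": "optimistic" }
        ]
      }
    }
  }
}
```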

The upload step sends the report to an “lhci” target server. The rest of the parameters, such as the URL and token, were provided in the YAML file using build variables.
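The corresponding upload section could look like this. The server URL and token below are placeholders; as mentioned, in this setup they were supplied through build variables rather than hard-coded in the file:

```json
{
  "ci": {
    "upload": {
      "target": "lhci",
      "serverBaseUrl": "https://lhci.example.com",
      "token": "YOUR_BUILD_TOKEN"
    }
  }
}
```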

The build then looked like this:

Screenshot of Azure Pipelines build indicating a failed “lhci” stage

Notice how the “lhci” task was marked with an exclamation mark? That was because it failed the assertions I set up in the configuration file, but the build was still allowed to continue thanks to the continueOnError: true property on the build task.

That should make the developers react ASAP :-)

The build log already started to look interesting; you might notice things that should probably be looked into:

Log of the Lighthouse CI output in Azure Pipelines

The output of the tool was a report of all assertions, which could be uploaded to Google's public storage or, in my case, to the Lighthouse CI server.

Lighthouse CI reporting server

The architecture for the reporting server was pretty straightforward — an app service backed by a Postgres database:
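The server side is configured in the same file format. A minimal sketch, with a placeholder Postgres connection string, could be:

```json
{
  "ci": {
    "server": {
      "port": 9001,
      "storage": {
        "storageMethod": "sql",
        "sqlDialect": "postgres",
        "sqlConnectionUrl": "postgres://user:password@db-host:5432/lhci"
      }
    }
  }
}
```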

Architecture of the Lighthouse CI reporting server

The reporting server had a web UI, which was neatly built and focused on a comparison chart between two reports:

Comparing latest build to base build — do not trust the performance scores on a CI environment

It also had a timeline for several measurements:

Timeline of the accessibility measurement group

Each report could also be opened in Google's Lighthouse Viewer for even more detail.

So it all looked great and fit neatly together, and the developers now had a nice timeline and distribution of each measured category.

What have I learned?

It turned out to be pretty easy to integrate the Lighthouse CI tool into the build pipeline, but after a month or so of tests, it seems the tool simply fails once in a while due to a Puppeteer timeout. Not a big deal.

Where to run the test
I am still unsure whether running the test in the build pipeline is the best solution, because it takes a while to run. I considered having the build pipeline trigger another pipeline to run the test side by side, but that would not be feasible when the test runs against the CI environment, because another build might overwrite the environment before the test finished. I would also lose the ability to mark the build as failed.

Where is the best place to target the test
Azure has a feature called ARM (Azure Resource Manager) deployments, which lets you spin whole environments up and down using predefined templates. If we wrote templates for the project, we could simply spin up an entire environment for the sole purpose of running the test, then tear it down again, or repurpose it for the next test. That would at least ensure the test runs in a clean environment, but the setup is complex.

Which is the best URL to test
I spent some time considering which URL should be tested; for ease of use, I chose to test only the frontpage, because I assume that is a hotspot. But the frontpage of our CI environment pulls content from an Umbraco Cloud dev environment, which is currently full of QA test content, so I am not sure that is the best page to test. Another option would be a dedicated test page containing a representative mix of content and widgets.

Thoughts for the future
If the project is containerized, a fun thought would be to spin it up directly on the build server and test it there. Hardware-wise, that would have been no worse than our current CI server, but I did not go down this road for the sake of simplicity.


  1. Thanks to Gurucharan Subramani for a great guide and Azure tool:
  2. Here’s the Azure tool I used:
  3. The Lighthouse CI client package:
  4. The Lighthouse CI server package:
  5. Thanks to Patrick Hulce for building and maintaining the Lighthouse CI Docker wrapper:

IMPACT Developers

Leading digital agency in Denmark