3 reasons fixing Core Web Vitals failed

Timothy Bednar
Published in Waterfaller.dev
3 min read · Apr 2, 2021

There are dozens of good articles that explain Google’s new Core Web Vitals and how to fix page speed issues. This post is for the moment after that: when you have tried to fix an issue flagged in your Core Web Vitals report and it is still failing.

Over the summer, I wrote an epic for largest contentful paint (LCP).

LCP for blog posts must be less than 4 seconds

This epic has been open for three months and contains a couple of dozen stories. In the strictest sense, I can close it because it meets my definition of done. However, if I zoom out, I still have a problem with largest contentful paint.

Samples from Core Web Vitals Report

This epic originated because our Core Web Vitals Mobile Report showed that most of our poor-scoring URLs were blog posts. So I took the sample URLs provided by Google as acceptance criteria. (This was my mistake.)

As an application, I want to load a featured blog image that is no wider than 420px.

This story and the others like it were tested against the same sample of poor-performing URLs in their acceptance criteria. I can close the epic, since all of these URLs now meet the 4-second benchmark, but the Core Web Vitals report is still red. Needless to say, the business believes there is still a problem and probably wonders if I am competent.

Sample set versus total population

I know the difference between a sample and a total population. In this case, my sample was ~20 posts and the total population is ~1,200. So I carefully selected the test URLs, making sure every category and locale was represented. I assumed that improvements to the sample would carry over to the entire population. That was not the case.

Reason #1 — too many total requests

According to the HTTP Archive, a typical page makes about 70 requests, whereas our blog posts make 130. In my experience, pages with that many requests are far more likely to slow down under real conditions.

When testing our blog posts with Lighthouse, I see unexpected variance in page speed scores. When a page is fast on one run, slow on the next, slower still after that, and then fast again, every analysis points to a different slowdown. The solution here is to prune the number of requests and stabilize page speed.
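
As a quick sanity check, the request count can be pulled straight from the Resource Timing API in the browser console. This is only a sketch: it assumes the page has finished loading and the resource timing buffer has not been trimmed, and the breakdown by initiatorType is just a convenient way to see where the 130 requests come from.

```
// Count the requests a blog post makes, grouped by initiator type.
// Run in the browser console after the page has finished loading.
const entries = performance.getEntriesByType('resource') as PerformanceResourceTiming[];

const byType = new Map<string, number>();
for (const entry of entries) {
  byType.set(entry.initiatorType, (byType.get(entry.initiatorType) ?? 0) + 1);
}

// +1 accounts for the HTML document itself, which has no "resource" entry.
console.log(`Total requests: ${entries.length + 1}`);
console.table(Object.fromEntries(byType));
```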

Reason #2 — test device not representative

For testing, I use an emulated Moto G4 on a fast 3G connection with a 4x CPU slowdown, matching the test device in Lighthouse. While this is a good proxy, it is clear that I need to identify a device and connection that better represent our slowest experiences. This leads me to the third reason I failed.
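
One way to keep that choice explicit is to pin the emulation settings in code instead of relying on whatever DevTools defaults to. This is a minimal sketch assuming the lighthouse and chrome-launcher npm packages; the URL is a placeholder, and the throttling numbers approximate the profile above (simulated throttling, 4x CPU slowdown) so they can be swapped out once field data shows which devices and connections are really the slowest.

```
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Audit one URL with explicit mobile emulation and throttling so that
// runs are comparable across releases. Values approximate a Moto G4-class
// device: simulated throttling with a 4x CPU slowdown.
async function auditLcp(url: string): Promise<number> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
      formFactor: 'mobile',
      throttlingMethod: 'simulate',
      throttling: { rttMs: 150, throughputKbps: 1638, cpuSlowdownMultiplier: 4 },
    });
    // numericValue is LCP in milliseconds for the simulated run.
    return result!.lhr.audits['largest-contentful-paint'].numericValue!;
  } finally {
    await chrome.kill();
  }
}

auditLcp('https://example.com/blog/sample-post').then((lcpMs) => {
  console.log(`LCP: ${(lcpMs / 1000).toFixed(2)} s`);
});
```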

Reason #3 — missing in-house field data

For three months, we sent Web Vitals to Google Analytics to generate in-house field data. The problem is that Google Analytics does not report the data in a useful form: even pulling the Web Vitals events into Data Studio produced nothing but averages. Despite all my noodling, I can only see the average LCP for a page, which is at best useless and at worst misleading. Google’s guidance says:

With regards to Web Vitals, Google uses the percentage of “good” experiences, rather than statistics like medians or averages, to determine whether a site or page meets the recommended thresholds. Specifically, for a site or page to be considered as meeting the Core Web Vitals thresholds, 75% of page visits should meet the “good” threshold for each metric.
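
To make that concrete, here is a sketch of the two numbers worth computing from raw LCP samples instead of an average. The summarizeLcp function and the sample values are hypothetical; the 2,500 ms “good” threshold is Google’s published limit for LCP.

```
// Summarize raw LCP samples (milliseconds) the way Core Web Vitals does:
// share of "good" visits and the 75th percentile, alongside the average.
function summarizeLcp(samplesMs: number[]) {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const p75 = sorted[Math.ceil(sorted.length * 0.75) - 1]; // nearest-rank percentile
  const percentGood = (sorted.filter((v) => v <= 2500).length / sorted.length) * 100;
  const average = sorted.reduce((sum, v) => sum + v, 0) / sorted.length;
  return { average, p75, percentGood };
}

// A handful of fast visits pulls the average under 2,500 ms even though
// only 62.5% of visits are "good" and the 75th percentile is 4,500 ms.
console.log(summarizeLcp([500, 600, 700, 800, 900, 4500, 5000, 5500]));
// -> { average: 2312.5, p75: 4500, percentGood: 62.5 }
```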

In the future, my “sample” will come from my in-house field data, NOT the Search Console. That way I will know not only which URLs to test, but also which devices I need to recreate the slowdowns. Then, after a release, I should be able to judge success quickly.
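
Here is a minimal sketch of that collection, assuming the web-vitals npm package (v3 or later) and a hypothetical /vitals endpoint that stores raw events. Keeping the raw value plus device and connection hints is what makes the 75th-percentile math above possible and shows which test devices are worth emulating.

```
import { onLCP } from 'web-vitals';

// Report each LCP sample as a raw event, with enough context to segment
// by page, connection, and device class later.
onLCP((metric) => {
  const body = JSON.stringify({
    name: metric.name,      // 'LCP'
    value: metric.value,    // milliseconds
    rating: metric.rating,  // 'good' | 'needs-improvement' | 'poor'
    page: location.pathname,
    // The Network Information API is not available in every browser.
    connection: (navigator as any).connection?.effectiveType ?? 'unknown',
    deviceMemory: (navigator as any).deviceMemory ?? null,
    userAgent: navigator.userAgent,
  });
  // sendBeacon keeps delivery reliable even if the page is unloading.
  navigator.sendBeacon('/vitals', body);
});
```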

Conclusion

To be more efficient at monitoring and fixing Web Vitals issues in the Search Console, I need to stabilize page speed by pruning requests, test on devices that represent our slowest visitors, and collect in-house field data I can actually use.

I work on improving page speed daily, and as a side hustle I created Waterfaller. I appreciate your comments.
