Understanding what Core Web Vitals means

Jan Maly
Published in Kurzor
14 min read · Sep 3, 2021

Having often seen non-technical people puzzled by it, I decided to make an effort to explain what Core Web Vitals actually means and what you can realistically expect from it. You will learn that this is not just another search engine ranking criterion or some abstract numerical score. Core Web Vitals has rather serious implications for how people perceive your website, and keeping track of these scores can be an important step in improving your business case.

Webpage loading speed has always been an interesting subject that attracted a lot of developer attention, but it often failed to interest business and decision-making people, probably because investing in such optimizations seemed excessive. In the past, I even remember hearing arguments like “people should invest in a faster internet connection” or “I don’t care about visitors with older phone models”.

That changed during the last decade. As is the case in many other areas, Google is to blame — they were the first to provide tools like PageSpeed Insights. And since Google has the ultimate power to determine the position of your website among search results, people started taking an interest too.

This article explains what Core Web Vitals (CWV) are and why they matter. It is not aimed at developers or website optimization gurus. Instead, it tries to reach people who have no technical background, are puzzled by hearing about CWV everywhere, and have a hard time imagining the meaning, impact and scope of the changes that optimizing for CWV entails.

What are Core Web Vitals

The Core Web Vitals are actually a part of what Google calls Web Vitals, which they define like this:

Web Vitals is an initiative by Google to provide unified guidance for quality signals that are essential to delivering a great user experience on the web.

Note the words “user experience”. Yes, UX. It turns out one of the major selling points of this initiative is to underline the impact of optimizing CWV on your visitors: if you improve your scores, people will be happier!

The Core Web Vitals result is not represented by a single number — do not confuse it with the score reported by PageSpeed Insights, as explained below in the “How do I check the CWV?” section. It is composed of three metrics, two of them measuring time, one being an artificial number:

  • Largest Contentful Paint — LCP (in seconds)
  • First Input Delay — FID (in milliseconds)
  • Cumulative Layout Shift — CLS (decimal number)

There are of course many more metrics that can be measured, such as TBT (Total Blocking Time), FCP (First Contentful Paint) and so on. They either supplement the CWV computation or have been made obsolete by it; nevertheless, you can still measure them to gain additional insight.

Before diving into more details, let’s explain the two different types of measurement, as Google describes them.

You can measure your results in a lab, which means running tests as part of the development process. Often automated, these are somewhat indicative, but since the web browser of your potential visitor is only simulated, they carry limited information.

An ideal use case for a lab test is to capture significant score degradations between website releases. That way, you are immediately alerted to the fact that a recent change made the website’s performance much worse (or better!).

Secondly, you can integrate the measurement into the actual production website and follow the results from real users. This kind of testing is called “in the field”. These results can vary dramatically, as they are tied to many more variables: different devices, different user behaviours, network speeds and so on. Sounds complex? Well, you do not need to perform this type of measurement yourself — Google collects the data and provides it to you via Google Search Console, as we will explain later.
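If you do want to collect your own field data, for example to feed your own analytics, Google’s open-source web-vitals JavaScript library wraps the necessary browser APIs. A minimal TypeScript sketch, assuming the library is installed from npm and a hypothetical /analytics endpoint on your server (function names are as of version 2 of the library; newer versions rename them to onCLS, onFID and onLCP):

```typescript
// Minimal sketch: report the three CWV metrics from real visitors
// to your own backend. The '/analytics' endpoint is a placeholder.
import { getCLS, getFID, getLCP, Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  // sendBeacon survives the page being closed, unlike a plain fetch.
  navigator.sendBeacon(
    '/analytics',
    JSON.stringify({ name: metric.name, value: metric.value })
  );
}

getCLS(sendToAnalytics); // unitless decimal
getFID(sendToAnalytics); // milliseconds
getLCP(sendToAnalytics); // milliseconds
```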

The scores

The Google team put a lot of valuable effort into scoring these factors objectively. There is real UX research behind it, described in detail here. It boils down to classifying your site’s performance into the following levels:

  • Poor: you should really pay attention to fixing the issues.
  • Needs improvement: workable state, in no way ideal.
  • Good: perfect, no action required.

For every factor in CWV, there is a range of values for each level, and if the measured number falls into that range, you obtain the corresponding level and color for that criterion.

We will specify the levels for each of the factors separately below.

For the overall performance classification of a page or site using “in the field” testing, it is also important to note that the 75th percentile is considered. That means that if at least 75% of page views meet the “good” level, you gain it, which classifies the remaining, possibly “non-good” 25% as edge cases. This way, you don’t have to worry that a single visitor with a very old or otherwise unusual browsing device will skew the result.

Also, your performance scores are segmented between mobile and desktop devices. This allows Google to target search results for a given device more precisely, meaning your mobile-optimized site will probably gain some advantage in search results displayed on mobile devices.

With that in mind, let’s start with the first CWV factor, which is:

Largest Contentful Paint — LCP

…is all about the loading speed of the page. Before CWV, there were multiple metrics such as First Contentful Paint (FCP) or First Meaningful Paint (FMP). In practice, the main thing they did was spread confusion over which meant what.

LCP is a step forward, clearly positioned closer to how a normal user perceives a web page.

LCP focuses on the initially visible portion of the page when it loads — sometimes called “above the fold”, a term originally coined for the visible “selling” half of a newspaper. Within that area, it locates the largest element that carries some meaning and measures how long it takes before it is finally presented (painted, hence “Paint”) and visible to visitors. Most often this is something like a main image or a visually compact body of text.

And this is basically all you need to know about LCP without diving into too many technical details important only for developers.

The idea behind this is simple: larger objects on screen attract the most attention and are therefore significant to users. This is why we need to measure how fast they load. Exactly as the folks at Google say:

Largest Contentful Paint is an important, user-centric metric for measuring perceived load speed because it marks the point in the page load timeline when the page’s main content has likely loaded — a fast LCP helps reassure the user that the page is useful.

LCP score ranges are:

  • Good: 2.5 seconds or less
  • Needs improvement: between 2.5 and 4.0 seconds
  • Poor: more than 4.0 seconds

Optimizing LCP is all about speeding up your website’s load to the point where meaningful information is presented — that is, minimizing loading time. There are a number of common optimization strategies, and I list them below using the exact keywords the optimization tools will show you. They are difficult to explain to non-developers, but allow me to try:

  • Optimizing server response time — important if your web server is visibly slowing the whole site down — basically, you see white space instead of any content for a long time. The cause could lie anywhere among server hardware, website server software, or network issues, and the optimization must target the faulty part — most often by enabling things like caching, upgrading web server software, increasing server hardware resources, or rewriting your website’s server code. For example, you can save time by preparing difficult computations or database queries in advance and storing just the result for the page load, instead of performing them on the fly. The complexity of this optimization therefore depends heavily on how many changes are needed on the server.
  • Removing blocking JavaScript / CSS — this time, the wait is caused by what we call “client code”, which is executed in the visitor’s browser. These assets block the display of vital information, usually because somebody marked them as critical, so they must load before anything else is displayed. The optimization here is to reduce the amount of such code, e.g. by loading it later, or to load it faster by — surprise! — optimizing server response time.
  • Using the lazy-loading concept to load images or other media just in time, as they are about to be displayed. You don’t need to load images that are “below the fold” immediately. This technique is well supported by today’s web browsers and is usually easy to achieve.
  • Speeding up resource loads — most often, images are too large in file size. They can be optimized using the image compression strategies already highlighted in our article on the subject. Compression of text resources is also nice to have: essentially “zipping” whatever your server serves that is textual in nature. This is most often client code — JavaScript, CSS — and it leads to a significant reduction in transferred bytes. This optimization usually comes down to enabling settings or installing third-party tools, so it is not very hard to achieve.
  • Pre-rendering dynamic parts — sometimes your web developers go all the way in making your website dynamic and use the latest cool technology, such as React. This can lead to large parts of the page being laboriously assembled in the browser while the page is loading. The optimization is pre-rendering: computing the result on the server in advance and serving it ready-made when the page is requested. That way, the browser treats it like a normal web page — something it is very skilled at displaying. Some clever implementation is needed, however, to make this work seamlessly when user interaction and further dynamic behaviour are expected. The complexity is therefore on the higher end, but some server software supports this very comfortably.
Figure 1: Largest Contentful Paint identified at our website during example testing.
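For the technically curious: tools like the one behind Figure 1 rely on a standard browser API that reports LCP candidates as the page loads. A minimal TypeScript sketch, for illustration only (the element property is part of the LCP entry but not yet in TypeScript’s built-in types, hence the cast):

```typescript
// Sketch: log each element the browser considers the LCP candidate.
// As larger content paints, newer entries replace earlier candidates.
const lcpObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(
      'LCP candidate at', entry.startTime.toFixed(0), 'ms:',
      (entry as any).element // the DOM element that was painted
    );
  }
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```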

First Input Delay — FID

…is all about the interactivity of a given web page. First Input Delay measures how long your web page needs before it starts responding to the user’s first interaction — a click, tap or key press, essentially, so fancy mouse-over (hover) effects do not count.

Your browser might actually be blocked from responding to such an event for some time, usually because of delays induced by client-side code. You have probably experienced this before: after loading and presenting some controls, a poorly conceived web page frustrates you by doing nothing for several seconds.

It is evident that this factor cannot be reliably measured without a user present, hence it is not reliably reported by lab testing and should be considered field-only. Your web page can, however, be lab-tested for the so-called TBT (Total Blocking Time), which correlates well with FID.
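To make the definition concrete, here is a minimal TypeScript sketch that computes the delay by hand from the standard “first-input” performance entry (real tools such as the web-vitals library add edge-case handling on top of this, so treat it as an illustration only):

```typescript
// Sketch: FID is the gap between the moment the user interacted
// (startTime) and the moment the browser was free to start handling
// the event (processingStart).
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    const fid = entry.processingStart - entry.startTime; // milliseconds
    console.log('First Input Delay:', fid.toFixed(1), 'ms');
  }
}).observe({ type: 'first-input', buffered: true });
```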

FID score ranges:

  • Good: 100 milliseconds or less
  • Needs improvement: between 100 and 300 milliseconds
  • Poor: more than 300 milliseconds

Optimizing FID is essentially about making your site responsive by finding what makes the website get stuck and fixing it:

  • by breaking long, complex client-side code into smaller parts (see the sketch below),
  • by delaying the execution of code that is not needed yet,
  • by lowering the visual complexity of a web page — a table with 1000 rows is probably not very usable anyway, so let’s make it more readable by paginating it.

So, essentially, most optimizations aim to reduce complex web pages into something more simple.
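The first bullet deserves a sketch. Browsers handle input on the same thread that runs your scripts, so one long task keeps clicks waiting. A hypothetical TypeScript example of splitting the work and yielding in between (renderRow stands in for whatever your page actually does per item):

```typescript
// Placeholder for the app-specific work done for one item.
function renderRow(row: string): void {
  // ...build and insert a DOM node for the row...
}

// Hand control back to the browser so queued input can be processed.
const yieldToMain = (): Promise<void> =>
  new Promise<void>((resolve) => setTimeout(resolve, 0));

async function processRows(rows: string[]): Promise<void> {
  for (let i = 0; i < rows.length; i++) {
    renderRow(rows[i]);
    // Pause every 100 rows instead of blocking for the whole list.
    if (i % 100 === 99) {
      await yieldToMain();
    }
  }
}
```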

As you can see in Figure 2, helpful tools such as Lighthouse can point developers to the bottlenecks and recommend optimizations for those places. You don’t have to understand it in much detail to notice the many red indicators showing second-long values.

Figure 2: Quite poor performance for mobile cnn.com reported by Lighthouse. Notice the Total Blocking Time being 1.67s — Lighthouse offers several detailed tips where TBT can be improved.

Cumulative Layout Shift — CLS

… is all about measuring content stability. Imagine a typical news site loading for the first time with that article you so much want to read. You start scanning the first paragraph, and then it happens: the content shifts because an ad was loaded above it, an information bar about cookies jumps in from the top as well, and you quickly lose focus.

You hate it, right? We can measure such bad practices with a number called Cumulative Layout Shift. Its function is to collect all these infractions over the page initialization and loading period, quantitatively scoring how much each of them shifts the layout; the sum is the CLS (hence the word “cumulative”).

The CLS number is somewhat artificial, but you should remember that minimizing it is the goal here, as the CLS score ranges tell:

  • Good: 0.1 or less
  • Needs improvement: between 0.1 and 0.25
  • Poor: more than 0.25

So CLS essentially says: content on the page should move only minimally, or ideally not at all, unless the visitor wants it to.
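The browser exposes the raw ingredients of this number, so a rough version of the measurement fits in a few lines of TypeScript. A sketch for illustration (Google’s official definition has since been refined to group shifts into “session windows”, and real tools handle that, so a plain sum like this can overestimate):

```typescript
// Sketch: approximate CLS by summing layout-shift scores, skipping
// shifts that follow recent user input, since those are expected.
let clsTotal = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.hadRecentInput) {
      clsTotal += entry.value;
      console.log('Layout shift:', entry.value, 'running CLS:', clsTotal);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```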

Optimizing CLS is perhaps the least difficult of the three. Usually, only a few places on a typical web page need the following improvements:

  • When loading images, reserve in advance the space the image will occupy on the page (see the sketch after this list). This is often forgotten, and the content then jumps as images are added. Using a placeholder such as a colored box is a bonus that helps users orient themselves.
  • Making sure that uncontrolled, dynamically loaded content (often an ad or banner added by an external marketing solution) is constrained to a predefined area, so the content of the web page does not move when ads or banners display.
  • Avoiding dynamically controlled (i.e. via client-side code) parts of the site that grow in size without an appropriate user interaction.
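Here is what the first point looks like in practice, as a TypeScript sketch for an image inserted by script (the URL, dimensions and #hero container are all hypothetical; for images written directly in HTML, the equivalent is the width and height attributes on the img tag):

```typescript
// Sketch: give a dynamically inserted image its intrinsic dimensions
// up front, so the browser reserves the slot before the file arrives
// and nothing shifts when it does.
const img = document.createElement('img');
img.src = '/images/hero.jpg'; // placeholder URL
img.width = 1200;             // reserve the space immediately
img.height = 600;
img.alt = 'Hero illustration';
document.querySelector('#hero')?.appendChild(img);
```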

In real-world examples, there is a difference between kinds of content shifts:

  • Accordion-type containers (panels that switch when the user clicks on them) are fine, as this change in content positioning is the result of user interaction — in other words, something that was expected.
  • A list of articles that initializes after the rest of the content is displayed, moving everything below it in the process, is a CLS infraction. Nobody asked for it, and it just disorients visitors.

In Figure 3, I again show the helpful output of Lighthouse, this time regarding CLS. You don’t need to be a developer to see the highlighted sections of the page that contribute to poor CLS.

Figure 3: Rundown of CLS issues found on mobile Amazon Prime: Lighthouse presents elements significant to the layout shift and their contribution in great detail.

What are the benefits

Well, for one, having a site in that green bar is a good feeling!

Jokes aside, the major benefit of a well CWV-optimized design is that your site becomes much more usable for the ordinary people visiting it:

  • decreasing the number of people leaving your page out of sheer frustration,
  • increasing the number of people accomplishing the goal that you have set,
  • improving the business effect of such a website.

And of course, Google keeps track of your site’s performance and uses it internally for ranking purposes, even if we can only guess exactly how. Actually, they offer some hints here. The impact on ranking should not be drastic; it is merely one of the factors behind your position.

From the benefits explained above, you can see that CWV-optimized sites are at the very least good to have — especially if you focus on the low-hanging fruit first, tackling issues with a large impact that are relatively easy to fix.

How do I check the CWV?

A very good overview of measurement tools is provided by Google, and when you check it, you will see that many of them are developer-only tools. Let me concentrate on the ones that can be quickly used by anyone without a technical background.

For “in the field” testing, use Google Search Console and its Core Web Vitals report. It uses real user data collected over the lifetime of the website and is therefore quite representative. It is very broad, however, identifying groups of URLs with a problem; for a more detailed analysis, use PageSpeed Insights, which works with both field and lab data and can analyze a single web page in great detail.

Developers usually use the Lighthouse tool, about which we have already written an extensive article.

Steps to take

1 — First, make sure you have access to Google Search Console for your website. This is described in depth here — essentially, you verify that you, as a Google user, have access to the given site. Please note that it can take up to a few days after adding a new site for the results to become accessible (they are, however, collected even while you don’t have access).

2 — With the Search Console operational, switch to your website in the left navigation and follow the Core Web Vitals link (Figure 4):

Figure 4: Core Web Vitals link, amongst other Experience tools in Google Search Console

3 — You are presented with a graph of analyzed website addresses (URLs) from your site over the past 3 months, split between mobile and desktop URLs. You see the green/orange/red segmentation (Figure 5).

Figure 5: An overview of the CWV results of your website during the past 90 days.

Important note: to appear in the graph, a URL has to meet the following criteria:

  1. Be indexable at the given time — meaning robots are allowed to parse it.
  2. Have some CWV results recorded and a minimal amount of data gathered for the page (meaning pages with low or no traffic are omitted).

4 — You can use the OPEN REPORT > link to access a more detailed report, in which you can see the issues that make your URLs red or orange.

5 — Each issue can be clicked for details; you will see example URLs where the issue occurs and a link to run a test. Following it, you land in PageSpeed Insights, a very simple tool to use, even as a standalone page tester: just enter the URL of the page you want to analyze and hit “Analyze” (Figure 6).

Important note: in the issue list, you also have the option to Validate Fix — which is wise to use once you are sure the issue is fixed, as it tells Google to revalidate the page.

Figure 6: Starting point of the PageSpeed Insights tool.

6 — The analysis then shows your overall results, separately for mobile and desktop. Before diving deeper into them, some important remarks:

A — The overall score (Figure 7) is not actually the score used to determine CWV compliance — it is produced by running the Lighthouse tool on the URL in a controlled environment, thus producing lab data. Lighthouse also takes more factors into account than CWV (six, to be precise).

Figure 7: Overall score of the tested web page.

B — CWV compliance is reported only if there is enough field data for the given URL. In that case, these bars are what you should be interested in (Figure 8):

Figure 8: The 3 result bars that are actually relevant to CWV (and one that is not directly relevant, FCP)

For the web page to pass green in CWV, all 3 CWV criteria (marked by the blue ribbon in the image above) need to be green at the 75th percentile, as we said at the start. In other words, none of them is allowed to drop below 75% in the green bar.

C — Below, you will find 6 indicators measured by Lighthouse (again, these are lab data!) and recommendations produced based on them. As we discussed, only two of the CWV factors can be measured in the lab: LCP and CLS. These are again marked by a blue ribbon (Figure 9):

Figure 9: Lab data analysis of the web page. This is a test that is performed in Lighthouse, in a simulated user environment based on the device type (desktop or mobile).

Of course, this was just a brief introduction meant for non-technical people. We can go into much more detail on specific projects and issues.

Contact us if you need further assistance with your website. At Kurzor, we have a history of optimizing page and website performance since long before CWV was introduced. We would be honored to help you with your project.

Jan Maly
Kurzor

Website and web-based technologies enthusiast, developer, UX analyst, co-founder of Kurzor. In the business since the early 2000s.