Browsing the Web while watching its vitals

Dion Almaer
Ben and Dion
5 min read · Apr 6, 2020

TL;DR I built some Chrome Extensions that show you vital metrics for sites as you browse the Web. Here I discuss the current metrics of choice and the extensions, and finish with some magic.

NOTE: This was originally posted on my own corner of the Internet.

These sure are unprecedented times. My mental state has fluctuated between anger, frustration, grief, guilt (privilege), and beyond. I am thankful for the people in my life who have been amazing, and I am globally thankful to all of the people who are either staying at home to help flatten the curve, or who are courageously doing their duties.

Working (and loving!) from home has been a reset, and I am trying to use that reset by setting up new habits (tiny!) whilst also not being hard on myself when I don’t hit them all.

One of these has been giving myself time to write a little code on something I have wanted to have in my browser for a while.

What’s the latest with Web Performance Metrics?

We all want to have great metrics that reflect the quality of our web experiences, and use them as guides for how we can improve.

These metrics have changed over time, as we get better at understanding how the browser engine maps to parts of the experience.

At web.dev/metrics you will see a current set of metrics that each look to understand a different piece of the overall experience:

Load time: First Contentful Paint (FCP) and Largest Contentful Paint (LCP)

Interaction: First Input Delay (FID)

Predictability: Cumulative Layout Shift (CLS)
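
To make that mapping concrete, here is a rough sketch (hand-rolled by me, not code from web.dev) of how each category surfaces through a PerformanceObserver entry type:

```js
// Log entries for each of the three metric categories as the browser emits them.
const watch = (type) =>
  new PerformanceObserver((list) =>
    list.getEntries().forEach((entry) => console.log(type, entry))
  ).observe({ type, buffered: true });

watch('largest-contentful-paint'); // load time (LCP)
watch('first-input');              // interaction (FID)
watch('layout-shift');             // predictability (CLS)
```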

At the last Chrome DevSummit, we spoke about the evolution of metrics, and how they are rolling out across our tooling.

For example, in the field a metric always starts with Chrome instrumentation, then gets picked up in CrUX, and is then reused in tools such as PageSpeed Insights.
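
As a concrete way to peek at that pipeline, the CrUX field data that PageSpeed Insights reports is also available from its API; here is a rough sketch of pulling it yourself (the URL is just a placeholder, and you only get field data back for pages CrUX knows about):

```js
// Query the PageSpeed Insights v5 API and print CrUX field percentiles.
// Runs in the browser console; swap in a fetch polyfill if you are in Node.
const fieldData = async (url) => {
  const endpoint =
    'https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=' +
    encodeURIComponent(url);
  const { loadingExperience } = await (await fetch(endpoint)).json();
  // Field metrics gathered from real Chrome users, e.g. FCP and FID percentiles.
  console.log(loadingExperience.metrics.FIRST_CONTENTFUL_PAINT_MS.percentile);
  console.log(loadingExperience.metrics.FIRST_INPUT_DELAY_MS.percentile);
};

fieldData('https://example.com');
```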

In the lab, you get access to them as Lighthouse and DevTools implement support.
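
And a minimal lab sketch, assuming the lighthouse and chrome-launcher node modules are installed (the URL is a placeholder):

```js
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  // Launch a headless Chrome for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const { lhr } = await lighthouse('https://example.com', {
    port: chrome.port,
    onlyCategories: ['performance'],
  });
  // Pull an individual metric audit out of the Lighthouse report.
  console.log(lhr.audits['first-contentful-paint'].displayValue);
  await chrome.kill();
})();
```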

Chrome Extensions that show the metrics

github.com/dalmaer/lcp-chrome-extension (The LCP extension)

After spending time looking at metrics while developing, and spelunking in the CrUX datasets, I found myself wanting to feel the impact of the metrics, so I decided to build some ambient Chrome Extensions.

There are individual extensions that show scores as the browser’s PerformanceObserver spits them out, and change color depending on the thresholds of each metric.

For example, the LCP extension will be green if it occurs in less than 2.5 seconds (a good score), yellow if between 2.5 and 4 seconds (an adequate score), or red if it takes longer.
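
In stripped-down form, the idea is roughly this (a simplified sketch rather than the exact code in the repo): a content script watches for LCP entries and hands the score to the background page, which owns the badge.

```js
// content-script.js — watch LCP and report it with a traffic-light color.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1]; // the latest candidate wins
  const seconds = lcp.startTime / 1000;
  const color = seconds < 2.5 ? 'green' : seconds < 4 ? 'yellow' : 'red';
  chrome.runtime.sendMessage({ metric: 'LCP', seconds, color });
}).observe({ type: 'largest-contentful-paint', buffered: true });

// background.js — paint the result onto the extension badge for that tab.
chrome.runtime.onMessage.addListener((msg, sender) => {
  chrome.browserAction.setBadgeBackgroundColor({ color: msg.color, tabId: sender.tab.id });
  chrome.browserAction.setBadgeText({ text: msg.seconds.toFixed(1), tabId: sender.tab.id });
});
```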

There are equivalent extensions for FCP, FID, and CLS.
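
CLS is the odd one out: rather than a single event, it is a running sum of layout-shift entries that were not triggered by recent user input, roughly like this:

```js
// Accumulate layout shifts into a single CLS score as the page lives on.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls.toFixed(3));
}).observe({ type: 'layout-shift', buffered: true });
```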

Now that I have been browsing around the Web with these, what has surprised me the most is how variable the results can be with N=1. If I load a page while a message comes into Hangouts Chat and I am on a VC… it may be slow. When developing, it’s important to isolate your environment… something that we very much notice with Lighthouse development too.

It is also kinda fun to dive into a page when there is an expectation mismatch. I will think to myself “huh, that seems like a decent load, why is the CLS so bad?” and start digging… only to find that the web fonts loaded slowly and did a swap, causing layout instability (although, to be fair, most of the time it’s ads loading late that cause the shift).
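
When I want to know what actually moved, the layout-shift entries can point at the culprits; something like this (a sketch, and the sources attribution may not be available in every Chrome build):

```js
// Log which elements shifted, and from where to where, for each layout shift.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    for (const source of entry.sources || []) {
      console.log('shifted:', source.node, source.previousRect, source.currentRect);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```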

It’s been really nice having the traffic lights give me feedback as I browse and let me explore, but this is only the beginning. The product teams are working on much better tooling that can showcase this type of information and tie it deeper into your workflow to give you actionable feedback.

Don’t forget the magic

Hitting these thresholds is a good guide for a quality experience, and they are only going to get better over time. It is important that we all take them into account, but also important not to do so blindly.

I like Derren as he explains that his magic is not magic (as it doesn’t exist 😉)

I have been enjoying me some Derren Brown action, such as the classic video above. He uses a series of techniques in his shows, and good ole misdirection is always involved.

Misdirection is a key trick with UI development too, as the goal centers around the perception of quality. This can sometimes be a touch at odds with pure metrics, as the browser engine may not be fully aware of the tricks that you are playing.

This is why it’s important to consider custom metrics, such as the Element Timing API, to measure what is really important to see on the page (which may not be the largest element that LCP fires on).
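
For example, if the thing that really matters is a hero image, you can tag it in the markup with elementtiming="hero" and watch for it (a quick sketch; the “hero” identifier is just a name I made up):

```js
// Report when the element annotated with elementtiming="hero" actually renders.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.identifier === 'hero') {
      console.log('hero rendered at', entry.renderTime || entry.loadTime);
    }
  }
}).observe({ type: 'element', buffered: true });
```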

There are some absolutes (human perception research shows those), but it always makes sense to go the extra mile and really understand how users are using your site… and to use the appropriate tricks. It turns out we are simple bears that can only focus on a small area 😉
