Continuous Front End Website Performance Testing

User-Centric Website Performance Monitoring in a Continuous Workflow

Adam Henson
Published in HackerNoon.com · Jun 19, 2019


Website performance should be analyzed from a “user-centric” perspective. Users typically look for visual feedback to reassure them that everything is working as expected; otherwise, they bail. In the past we viewed website performance through a pinhole. User experience design and development is vastly different nowadays, as we accommodate a wide variety of devices and network conditions.

Load times vary dramatically from user to user, depending on their device capabilities and network conditions. Traditional performance metrics like load time or DOMContentLoaded time are extremely unreliable since when they occur may or may not correspond to when the user thinks the app is loaded.

~ User-centric Performance Metrics | Web Fundamentals | Google Developers

Website Performance Metrics

The metrics below represent important points of the page load life cycle. Each answers questions about user experience.

  • First Contentful Paint: Is it happening? Did the navigation start successfully? Has the server responded?
  • First Meaningful Paint: Is it useful? Has enough content rendered that users can engage with it?
  • Time to Interactive: Is it usable? Can users interact with the page, or is it still busy loading?
  • Long Tasks (absence of): Is it delightful? Are the interactions smooth and natural, free of lag and jank?

Lighthouse is a tool that can be used manually or programmatically to provide these types of user-centric metrics. Each metric is described in detail in the Lighthouse documentation; take Time to Interactive, for example.
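When run programmatically, Lighthouse produces a JSON report whose `audits` section carries these metrics. The sketch below, with made-up values, shows one way to pull a metric out of such a report (the audit IDs `first-contentful-paint` and `interactive` match Lighthouse's naming, but the report here is a trimmed stand-in, not real output):

```javascript
// Trimmed stand-in for a Lighthouse JSON report; values are invented.
const report = {
  audits: {
    'first-contentful-paint': { numericValue: 1200 },
    'interactive': { numericValue: 3400 },
  },
};

// Read a metric's value in milliseconds, or null if the audit is absent.
function metricMs(report, auditId) {
  const audit = report.audits[auditId];
  return audit ? audit.numericValue : null;
}

console.log(`Time to Interactive: ${metricMs(report, 'interactive')} ms`);
```

A script like this can feed a dashboard or fail a build when a metric crosses a budget.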

The Front End

That’s right, front end engineers: your job just became even more difficult. Many performance metrics are impacted by the way we construct the DOM and by how we optimize, bundle, and load resources.

Thanks?

Below are a couple of important examples we should consider as front end engineers. Lighthouse and its documentation of these metrics provide a more comprehensive overview.

Imagery and Video

Lazy loading aside, we can improve website performance simply by optimizing and reformatting imagery. In my experience this has been the biggest culprit on websites suffering from poor performance. Consider serving your imagery in “Next-Gen” formats.
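One way to serve next-gen formats is content negotiation: the browser advertises the image types it supports in its `Accept` request header, and the server picks the best match. The helper below is a hypothetical sketch of that decision (the source map and file names are invented; in practice the `<picture>` element or a CDN often handles this for you):

```javascript
// Pick the best image variant the client says it supports.
// `acceptHeader` is the request's Accept header; `sources` maps
// format names to (hypothetical) file paths.
function pickImageSource(acceptHeader, sources) {
  if (acceptHeader.includes('image/avif') && sources.avif) return sources.avif;
  if (acceptHeader.includes('image/webp') && sources.webp) return sources.webp;
  return sources.fallback;
}

const sources = { avif: 'hero.avif', webp: 'hero.webp', fallback: 'hero.jpg' };
console.log(pickImageSource('image/avif,image/webp,*/*', sources));
```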

Another heavy hitter is the way we load large resources like images and video. When possible, lazy loading can be quite effective. Read more about lazy loading here.

Bundling JavaScript and CSS

Another major culprit I’ve seen cause poor performance in many websites is rooted in the way assets are bundled and loaded. I’ve found bundling assets in smaller chunks to be quite effective in improving performance. If you can swing it, loading critical resources on page load and non-critical resources on demand can make a tremendous difference. With modern build tools like Webpack, we can accomplish this with techniques such as code splitting and tree shaking.
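The on-demand half of that idea can be sketched with a dynamic `import()`, which Webpack turns into a separate chunk fetched only when first needed. The module path `./chart.js` and the `renderChart` export below are hypothetical names for illustration; the loader is injectable so the sketch stays self-contained:

```javascript
// Load a heavy, non-critical module only when the user first needs it.
// With Webpack, the import() in the default loader becomes a split chunk.
async function showChart(load = () => import('./chart.js')) {
  const { renderChart } = await load();
  return renderChart();
}

// Typical browser usage (hypothetical button element):
// button.addEventListener('click', () => showChart());
```

Because the chunk is excluded from the initial bundle, metrics like Time to Interactive improve for users who never open the chart at all.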

Website Performance Monitoring

By using Lighthouse integrations like Foo’s Automated Lighthouse Check, for example, you can performance test your website automatically over time. It provides a timeline view of important metrics like “First Meaningful Paint”, “First CPU Idle”, “First Contentful Paint”, “Speed Index”, “Time to Interactive”, and “Estimated Input Latency”. In the Twitter example below, you can see how beneficial it can be to correlate a performance drift to a specific time and day. In this example, Time to Interactive and First CPU Idle spike up in milliseconds (which is bad) around April 22nd at 10am.

Twitter Performance Drift

You can see performance examples of other major websites here on Automated Lighthouse Check. Adding pages to monitor performance is easy. See the docs for guidance.

Monitoring Website Performance Automatically in a Continuous Workflow

Example Continuous Delivery Diagram from CircleCI

Using tools like Jenkins or CircleCI combined with Automated Lighthouse Check, you can add a post-deploy step in your pipeline to automatically test performance. Automated Lighthouse Check will trigger a performance audit, save it with a tag (typically a build number), and make it available in the timeline chart alongside any other performance tests triggered manually, automatically, or from a post-deploy step (an API call). To establish this performance regression testing step, you would simply add a request to the Automated Lighthouse Check public API. For instructions, you can read this article.
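The post-deploy request might look something like the sketch below. The URL, payload shape, and auth are assumptions for illustration, not the documented Automated Lighthouse Check API; consult its docs for the real endpoint. The HTTP function is injected as a parameter so the sketch stays testable:

```javascript
// Hypothetical post-deploy trigger: POST the CI build tag to a monitoring API.
// `post` is an injectable fetch-like function (e.g. globalThis.fetch).
async function triggerAudit(buildTag, post) {
  const res = await post('https://example.com/api/lighthouse-audits', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ tag: buildTag }), // tag ties the audit to a build
  });
  return res.status === 200 ? 'queued' : 'failed';
}

// CI usage might pass the build number from an environment variable, e.g.
// triggerAudit(process.env.CIRCLE_BUILD_NUM, globalThis.fetch);
```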

Conclusion

Website performance has become increasingly important in modern web development. Considering the diverse ways users view and interact with websites, we need to look at website performance from a user-centric perspective.
