This was the first year I attended Google’s huge developer conference, I/O. There was a massive array of topics to learn about, but since my focus at Compare the Market is all about our web offering, I chose to turn my attention to that area.
After the traditional keynote speeches that kicked off the conference, I watched many talks and presentations around web technologies. The main message that came through in several sessions, though, was website performance and budgeting.
So instead of listing all the big “wow” moments at the conference, I thought it made more sense to talk about the underlying performance focus and how Google suggests performance budgets can help achieve it.
Speed is the most important factor for the customer
According to Google’s recent “Speed Matters” research, the biggest impact on user experience is the time it takes a page to load. This matters more than how easy it is to find things, how well the site fits the user’s screen, how simple the site is to use, and even how attractive it looks.
This also correlates with the likelihood that a user will bounce from the site if it doesn’t load fast enough. When a site goes from a one-second load time to six seconds, users are more than twice as likely to bounce [Google / SOASTA].
Many examples were given of household names that had huge success when using this approach to optimising their web experience — Pinterest, Walmart, The Times of India, The Telegraph, Twitter and many others have all implemented performance budgeting as a way of tackling their website speed.
But how does it work?
Define the metrics to set budgets against
There are many metrics for measuring website performance. These can be split into three major groups: Quantity, Milestone, and Score based metrics.
- Quantity based metrics count what the browser has to download and process: for example, the number of requests made, the total page weight, or the amount of JavaScript shipped. Budgets here cap how much code and how many assets you send to the client.
- Milestone based metrics measure when the user starts to see the website appearing on their device, when they can start using it, and when it’s fully loaded. Two key metrics here are “First Contentful Paint” (FCP) and “Time To Interactive” (TTI). FCP measures the time from navigation to the point when the browser renders the first bit of content from the DOM, whereas TTI measures the time from navigation to the point when the website is usable. If you pair these two metrics with “Fully Loaded”, you will see a good representation of the story of your page load.
- Score based metrics, such as those from Lighthouse or GTmetrix, grade your website on a number of different factors by analysing what is sent to the client. The more optimised the website is, the higher the score or grade will be.
It’s important to get a mix of these three areas into any budget so that you cover all the bases: making sure the site appears quickly in the browser, doesn’t download too much in the way of code and assets, and keeps a close eye on best practice.
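In the browser, milestone metrics such as FCP can be captured with the standard PerformanceObserver API. A minimal sketch, assuming a helper name and callback shape of my own invention:

```javascript
// Sketch: observe paint milestones (e.g. FCP) via the Performance API.
// The helper name `observePaintMetrics` is hypothetical, not from the talks.
function observePaintMetrics(callback) {
  // No-op outside a browser, or where paint entries aren't supported.
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes ||
      !PerformanceObserver.supportedEntryTypes.includes('paint')) {
    return null;
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // entry.name is 'first-paint' or 'first-contentful-paint';
      // entry.startTime is milliseconds since navigation start.
      callback(entry.name, entry.startTime);
    }
  });
  // `buffered: true` replays paint entries recorded before we subscribed.
  observer.observe({ type: 'paint', buffered: true });
  return observer;
}
```

In a real page you would typically forward these values to your analytics endpoint, which is how RUM tools collect the same data.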
Assign amounts for the metrics you have chosen
Choosing the right budget amounts for your chosen metrics can be done in a number of ways.
- Use a performance budget calculator, such as the one by Jonathan Fielding over at www.performancebudget.io. This calculator tells you the amounts you have to play with if you want your site to load in x seconds over a particular connection (EDGE / 3G / 4G / cable).
- Use the Chrome User Experience Report (CrUX) to help create budgets that put your website in a good place versus your competition (and other popular websites that don’t directly compete). Using the data in the public Google BigQuery project for CrUX, it’s possible to examine metrics such as FCP and DOMContentLoaded for popular websites over time.
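Once you have settled on numbers, it helps to write them down in a machine-readable form. As one example, Lighthouse’s budgets feature accepts a budget file along these lines; the figures below are purely illustrative, not recommendations:

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 },
      { "metric": "first-contentful-paint", "budget": 2000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "total", "budget": 500 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

Timings are in milliseconds and resource sizes in kilobytes; Lighthouse can then report whether a given page run stayed inside these limits.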
Improve your website with help from various tools, and get within those budgets
If you’ve used options one or two above for setting your budgets, then you may well need to do some work on your website so that it sits within those boundaries.
Luckily, there are lots of excellent tools out there to help achieve this.
- Google Lighthouse is the obvious first port of call (excuse the pun), and will pinpoint many opportunities to improve a website’s load speed. You can run it in a number of ways: on the web (including the new web.dev site), via a Chrome extension, or from the Audits tab in Chrome’s developer tools. You can also run Lighthouse as a Node module or from the command line.
- Chrome developer tools are also invaluable for helping developers see precisely what is going on as a website loads. You can simulate various device and network types to examine how the page loads in different scenarios. Under the network tab, you can record the loading of a page, and see not only a waterfall view of all the assets and requests being loaded — but you can also see a filmstrip of the page load over time. This is a useful way to understand the flow of the page as it loads, and what your users will see as the seconds pass by.
- https://www.webpagetest.org is another way to examine the resource waterfall for your page load. The key with WebPageTest is that you can configure it to test from different locations, with a variety of device and network connection types. The site will also grade you across a number of metrics, and lets you drill down into the data it gathers to diagnose where any bottlenecks are.
- https://gtmetrix.com/ is also a handy resource for examining your website. A bit like WebPageTest, you can configure the connection type and device, plus the location the test runs from. The results are comprehensive, giving you scores based on its own grading and on the open-source YSlow project. Both reports will show you where the opportunities lie for improving your page performance.
Monitor and alert against your budgets
Once you have your performance budget, and your site sits comfortably within it, you need to make sure you don’t break through it. Keeping on top of this can be done in a variety of ways.
- Regular synthetic testing. Tools such as Calibre or SpeedCurve can be configured to performance-test your pages from specified locations, on a large range of device and connection types, at chosen intervals. You can then build up a set of data over time for your key metrics, and raise an alert whenever a budget is broken.
- Real User Monitoring (RUM). While it’s important to have synthetic tests running against your website, it’s also critical to make sure those results correlate with what your actual customers are experiencing. This can be done with RUM. There are many tools out there to track this, such as New Relic, Pingdom and mPulse, and it was announced at I/O this year that Firebase now has web RUM alongside its app monitoring. These can be set up to track key metrics, and alert you when averages break your budgets.
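Whichever monitoring route you take, the core check is the same: compare each measured metric against its budget and flag any breaches. A rough sketch of that logic, with hypothetical metric names and illustrative thresholds:

```javascript
// Sketch: flag metrics that exceed their budget.
// Budgets and measurements are plain maps of metric name -> value;
// timings in milliseconds, sizes in kilobytes (illustrative units).
function findBudgetBreaches(budgets, measured) {
  const breaches = [];
  for (const [metric, budget] of Object.entries(budgets)) {
    const value = measured[metric];
    if (value !== undefined && value > budget) {
      breaches.push({ metric, budget, value, overBy: value - budget });
    }
  }
  return breaches;
}

// Example: a TTI budget of 5000 ms and a script-size budget of 150 KB.
const breaches = findBudgetBreaches(
  { 'time-to-interactive': 5000, 'script-kb': 150 },
  { 'time-to-interactive': 6200, 'script-kb': 140 }
);
// breaches contains one entry: time-to-interactive is 1200 ms over budget.
```

A monitoring tool is essentially running this comparison on every test run or RUM sample, then wiring the breach list into its alerting channel.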
Google I/O sessions to watch on these topics
I would highly recommend watching these three talks around performance, tooling, and real-world examples from this year’s I/O.
Speed at Scale: Web Performance Tips and Tricks from the Trenches (Addy Osmani and Katie Hempenius)
Demystifying Speed Tooling (Elizabeth Sweeny, Paul Irish and Amir Rachum)
Building Successful Websites: Case Studies for Mature and Emerging Markets (Aanchal Bahadur, Matt Doyle, Jesar Shah, Charlie Croom, Rudra Kasturi)
Hopefully that’s given a bit of insight into one of the main underlying themes at Google I/O 2019. Roll on next year.