Introduction to performance monitoring with SpeedCurve

Conor Malone
Published in Making Gumtree
Feb 11, 2021

Last year at Gumtree, one of our goals was to invest more time and effort in front-end performance and adopt a strategy for making the site run fast. We hope to build a performance-first culture, and going forward we would like our engineers to keep page speed in mind as they develop new features. Not only is page speed vital for a great user experience, it is also more important than ever now that Google has announced that, from May 2021, it will take page speed into account in its search ranking algorithm.

SpeedCurve dashboard

Before we could start making improvements in this area, we needed the right tooling in place to track and monitor the changes we planned to make. One of the tools we adopted was SpeedCurve, an online service for visually monitoring the front-end performance of a site across different browsers and platforms. It allows us to create custom dashboards that track a wide range of useful metrics. Furthermore, we can set performance budgets for any of these metrics and send alerts via webhooks whenever a budget is exceeded, notifying us of performance regressions. This makes it easy to see the impact of the changes we make and to fix them promptly should they hurt page speed.

Synthetic or Real User Monitoring?

SpeedCurve offers two approaches to measuring site performance. The first is synthetic testing, which aims to minimise the external factors that can affect page speed, e.g. device, location, and time of day. Synthetic tests are built on top of WebPageTest, widely considered one of the best synthetic testing tools available. They run on the same real device, from the same location, at the same connection speed, and at fixed intervals. This strategy is useful for pinpointing the effect that new code has on performance. It is also useful for tracking the performance a user perceives the first time they visit the site, which is what the page ranking algorithm is based upon.

On the other hand, real user monitoring samples actual visitors to the site across a variety of devices and locations. The benefit of this strategy is the large amount of user data it gathers, which is invaluable if you want to correlate performance metrics with user behaviour. Likewise, if you are running A/B experiments in production, a large sample size gives better insight into how those experiments are performing.

So which strategy should you choose? The two are complementary and can be used side by side to gain invaluable insight into your site’s performance: real user monitoring gives a wide-angle view of page speed, while synthetic testing provides the detail needed to dive in and make improvements.

Custom dashboards

SpeedCurve comes with a lot of useful built-in dashboards, and these are worth exploring. It also lets you create custom dashboards from a wide range of metrics. One really useful metric to track is the Lighthouse performance score. Lighthouse is an open-source, automated tool developed by Google to audit the performance of a site, and the Lighthouse score is a weighted average of six metrics that Google researchers have deemed to have the biggest impact on user-perceived performance. Moreover, three metrics known as Core Web Vitals (Cumulative Layout Shift, First Input Delay and Largest Contentful Paint) will be used as a determining factor in the new Google search ranking algorithm from May 2021. The Lighthouse score is a good starting point for understanding your site’s performance, and we created a custom dashboard to track it across the most common devices and browsers our visitors use.

Lighthouse score dashboard tracked over critical pages
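To make the weighting idea concrete, here is a minimal sketch of how a Lighthouse-style composite score could be computed from per-metric scores. The weights below approximate those published for Lighthouse v6 and are illustrative only; the real tool first maps raw metric values onto 0–1 scores using log-normal curves, a step this sketch skips.

```python
# Illustrative sketch: combining per-metric scores into one 0-100 score.
# Weights approximate Lighthouse v6 and are assumptions, not the live values.
WEIGHTS = {
    "first-contentful-paint": 0.15,
    "speed-index": 0.15,
    "largest-contentful-paint": 0.25,
    "time-to-interactive": 0.15,
    "total-blocking-time": 0.25,
    "cumulative-layout-shift": 0.05,
}

def weighted_score(metric_scores: dict) -> int:
    """Combine per-metric scores (each 0-1) into a single 0-100 score."""
    total = sum(metric_scores[name] * weight for name, weight in WEIGHTS.items())
    return round(total * 100)

# Example: a site that does well on layout stability but poorly on blocking time.
example = {
    "first-contentful-paint": 0.9,
    "speed-index": 0.8,
    "largest-contentful-paint": 0.7,
    "time-to-interactive": 0.85,
    "total-blocking-time": 0.6,
    "cumulative-layout-shift": 0.95,
}
print(weighted_score(example))
```

Because the heavier weights sit on Largest Contentful Paint and Total Blocking Time, improvements to those metrics move the composite score the most.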

For a more detailed view of how pages are performing, we have extra dashboards where we monitor some useful metrics such as speed index, time to first byte, JS content size and the time a page takes to become visually complete.

Which browsers & devices?

We analysed our Google Analytics data to determine which browsers and devices to test against. For mobile testing, we chose two mid-range devices on 3G connections to ensure our site runs well on less powerful hardware. Based on this, our synthetic tests are carried out on the following devices:

  • Chrome Desktop (Cable connection)
  • Nexus 6 (3G connection)
  • iPhone 5 (3G connection)

Performance budgets

Once we had decided which metrics to track, the next step was to set a budget for each one. This step is important because it establishes a benchmark for what we consider an acceptable level for each metric. Performance budgets are easy to set up in SpeedCurve, and the tool provides a dedicated view where each budget can be monitored on a dashboard. Any budget that is exceeded is automatically flagged in red, letting us quickly see which areas of the site are suffering.

Status page showing performance budgets
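The budget idea itself is vendor-neutral and simple to express in code. The sketch below checks a run's metrics against a set of budgets; the metric names and budget values here are illustrative assumptions, not our actual configuration.

```python
# Minimal sketch of a performance-budget check, independent of any vendor API.
# Metric names and budget values below are illustrative assumptions.
BUDGETS = {
    "lighthouse_score": {"min": 70},        # score should not drop below 70
    "time_to_first_byte_ms": {"max": 600},  # TTFB should not exceed 600 ms
    "js_size_kb": {"max": 450},             # shipped JS should stay under 450 KB
}

def exceeded_budgets(metrics: dict) -> list:
    """Return the names of metrics that break their budget in this run."""
    failures = []
    for name, budget in BUDGETS.items():
        value = metrics.get(name)
        if value is None:
            continue  # metric not measured in this run
        if "max" in budget and value > budget["max"]:
            failures.append(name)
        if "min" in budget and value < budget["min"]:
            failures.append(name)
    return failures
```

A dashboard then only needs to colour a metric red when it appears in the returned list, which is essentially what SpeedCurve's budget status view does for us.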

Monitoring

Dedicated dashboards and the ability to see how your site is performing are great, but it is even better to know immediately when site performance begins to drop so we can act accordingly. For this, we set up alerts for each of our performance budgets, which notify all interested parties in a private Slack channel. Once again, SpeedCurve makes this straightforward to set up.

Setting up alerts on SpeedCurve
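If you wanted to wire up the Slack side yourself rather than rely on a vendor integration, a Slack incoming webhook accepts a simple JSON payload. The sketch below is an assumption-laden illustration: the webhook URL is a placeholder, and the alert fields are made up for the example.

```python
import json
import urllib.request

# Placeholder URL: a real Slack incoming webhook is created per workspace.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def build_alert_message(metric: str, value: float, budget: float) -> dict:
    """Format a budget breach as a Slack incoming-webhook payload."""
    return {
        "text": (
            f":warning: Performance budget exceeded: {metric} "
            f"is {value}, budget is {budget}."
        )
    }

def post_to_slack(payload: dict) -> None:
    """POST the JSON payload to the Slack webhook (makes a network call)."""
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; add error handling in practice
```

In practice SpeedCurve's own webhook alerts spare us from maintaining this glue code, but the underlying mechanism is the same.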

Synthetic tests can occasionally return variable results, and we do not want to be alerted every time a single test exceeds a performance budget. An inconsistent result can have many causes, for example the amount of traffic on the site at the time, and we are more interested in how the site performs over time. For this reason, we can set a threshold for the number of times a budget must be exceeded before we are notified. This reduces noise from the notifications and makes them easier to act on.
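The thresholding logic can be sketched as a small state machine: only fire an alert once a budget has been breached several runs in a row, and reset the streak on any passing run. The threshold value here is illustrative, not the one we actually use.

```python
from collections import defaultdict

# Sketch of threshold-based alerting: only notify after a budget has been
# exceeded N times in a row, smoothing over one-off noisy synthetic runs.
# The threshold of 3 is an illustrative assumption.

class BudgetAlerter:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.consecutive = defaultdict(int)  # per-metric breach streak

    def record(self, metric: str, exceeded: bool) -> bool:
        """Record one test result; return True when an alert should fire."""
        if not exceeded:
            self.consecutive[metric] = 0  # a passing run resets the streak
            return False
        self.consecutive[metric] += 1
        return self.consecutive[metric] >= self.threshold
```

With a threshold of 3, a single noisy run stays silent, while a genuine regression that persists across consecutive runs still reaches the Slack channel quickly.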

Next steps

With the right tooling in place, we had a good starting point for improving site performance. This is only the beginning of the journey, however. The next steps are to identify the current performance bottlenecks and prioritise the issues that offer the biggest improvements, and SpeedCurve will be a valuable asset in doing so.
