How to build a performance culture at scale

FARFETCH Technology
10 min read · Sep 13, 2019


By Manuel Garcia, Principal Engineer.

This post was originally published on our F-Tech Blog. Come check it out here :-)

Here at Farfetch, we are always striving to be at the top of our game. On a site where every detail counts towards delivering the best possible experience, we understand that performance is a crucial factor in engaging our users and boosting conversion rates.

Over the last few years, we felt we needed to take a step forward. We were not simply trying to improve site speed; our objective was far more ambitious. We wanted to gather everyone, not only engineering, around performance. We wanted to hear the term performance daily. We wanted a performance culture.

To achieve this, we realised that it couldn’t be a single person’s effort. We had to gather expertise from different areas. Hence we created a team of experts from four different areas in the company: engineering, infrastructure, architecture and product.

We called this team “Performance Matters”. It would not centralise performance concerns; instead, it would drive our performance culture and push it forward.

As with any new team, we had questions that we wanted to answer to elevate our game. Our three main questions were:

  • Which performance metrics should we use?
  • How to prevent performance regressions?
  • How to create awareness around performance across the company?

We had a rationale behind them. We first needed a way to track progress, hence the metrics. Then, even before we started to improve, we wanted to understand how to spot regressions, because we didn’t want to backslide to the same place six months later. These two questions laid the groundwork for the third, where we would engage with the community and spread performance as part of our lifestyle.

Which performance metrics should we use?

To answer this question, we selected Start Render, Speed Index and Time To Interactive (TTI). To review our rationale behind each metric, please reference this series of posts: The need for speed — measuring today’s web performance.

We started tracking these metrics with Synthetic and Real User Monitoring (RUM). Unfortunately, not all metrics are available in RUM: we have fewer options because we can only rely on JavaScript and DOM APIs to extract data. Our current setup monitors the following metrics:

  • Start Render (Synthetic).
  • Speed Index (Synthetic).
  • Page Load Time (Synthetic and RUM).
  • Time To Interactive (Synthetic and RUM) — the RUM version is not as rich as the Synthetic one because JavaScript simply does not have access to the number of in-flight network requests available to a synthetic testing tool.
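To make the RUM side concrete, here is a minimal sketch of how Page Load Time could be derived from the browser’s Navigation Timing API. The function names and the reporting callback are our own illustrations; only `performance.getEntriesByType('navigation')` is the standard API.

```javascript
// Derive Page Load Time (PLT) from a Navigation Timing entry.
// Accepts any object with the same fields as a
// PerformanceNavigationTiming, which makes it easy to unit test.
function pageLoadTime(navEntry) {
  // loadEventEnd marks the end of the window "load" event;
  // startTime is 0 for navigation entries, so the difference
  // is the full page load time in milliseconds.
  return navEntry.loadEventEnd - navEntry.startTime;
}

// In the browser, the entry comes from the Performance API.
// `send` is a placeholder for whatever beacons data to a RUM backend.
function reportPLT(send) {
  const [nav] = performance.getEntriesByType('navigation');
  if (nav && nav.loadEventEnd > 0) {
    send({ metric: 'plt', value: pageLoadTime(nav) });
  }
}
```

This separation (pure calculation vs. browser-API plumbing) is what lets the same metric logic be exercised in tests without a real page load.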

Our weekly performance report was entirely based on RUM Page Load Time (PLT).

We enhanced it by bringing in all of the other metrics.

We are also planning to add TTI on RUM to have more field data.

This mix of different metrics provides our stakeholders with greater insight into the quality of our users’ experience.

How to prevent performance regressions?

Performance budgets are the answer! A performance budget is a framework that enables us to apply and measure shared, quantifiable standards of site performance to ensure that our site is delivering an excellent user experience. It establishes a shared culture of enthusiasm for improving and keeping our performance in a healthy state.

Performance budgets have an even more significant impact when we can correlate performance numbers with business numbers. They provide stakeholders with meaningful metrics to point to when justifying investments being made. They also provide an objective framework for identifying which source code changes help or hurt our users’ experience. As the name implies, performance budgets provide self-imposed thresholds that help us understand if we can afford the impact of our proposed code changes.

Because of their importance to our business operations, we want to keep our budgets validated at all times. To validate a budget, we need tools to enforce its thresholds, and we use different tools for different environments. The sooner we can detect a regression, the better.

Here are examples of the different environments and tools we can use to validate performance budgets:

  • Local environment — the most convenient place for a quick test, even during development.
      ◦ Bundlesize
      ◦ Lighthouse
  • CI/CD — the most critical place to catch performance budget violations.
      ◦ WebPageTest (private instances)
      ◦ Lighthouse
  • Live — validates whether the internal tests prevented any live regression.
      ◦ SpeedCurve
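As an illustration of the local-environment tooling, a Bundlesize check lives in `package.json` and fails when a bundle exceeds its size limit (gzipped by default). The paths and limits below are invented for illustration, not our actual configuration:

```json
{
  "bundlesize": [
    { "path": "./dist/main.*.js", "maxSize": "170 kB" },
    { "path": "./dist/vendor.*.js", "maxSize": "100 kB" }
  ]
}
```

Running `bundlesize` locally or in CI then gives an immediate pass/fail signal per bundle.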

In addition to these tools, we need metrics to express our budgets. Here are the various types we can use:

  • Time-based metrics (e.g. Time To First Byte, Load Time, Start Render, TTI).
      ◦ Easier to understand.
      ◦ Easier to monitor.
  • Quantity-based metrics (e.g. total number of requests, overall page weight, total image weight).
      ◦ Easier to understand the impact.
      ◦ Can be a lot more stable.
      ◦ Lack a direct, causative link to the user experience.
  • Rule-based metrics (e.g. WebPageTest score, Lighthouse score).
      ◦ Drive us towards best practices.
      ◦ Can summarise the technical quality of pages.
      ◦ No direct relationship with the user experience.

Besides these metrics, we also need informed estimates about the audience of the site, including which devices and networks are the most representative. Suppose our typical user has a powerful device on a fast connection; that can establish a baseline for a performance budget. However, we also want to look into the long tail of performance, where we test worst-case scenarios. Efforts to improve the slower scenarios improve the faster ones too. We can have budgets for both cases, or more, as long as they provide value.

Combining all of the above considerations can start to shape a performance budget that might look something like the following:

  • “The server must respond in under 300 milliseconds” — time-based.
  • “Our product page must load in under 3 seconds on an iPhone 8 with a 4G connection” — time-based.
  • “Our listing page should weigh no more than 3 MB” — quantity-based.
  • “Our homepage must score at least 90 points on Lighthouse” — rule-based.
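Budgets like these can also be written down in machine-readable form. For example, Lighthouse accepts a `budget.json` file (its LightWallet feature); the numbers below loosely mirror the examples above and are illustrative, not our actual budgets:

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 3000 }
    ],
    "resourceSizes": [
      { "resourceType": "total", "budget": 3000 },
      { "resourceType": "image", "budget": 1000 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```

Timing budgets are in milliseconds and resource-size budgets in kilobytes, so a single file can express time-based, quantity-based and count-based thresholds at once.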

Typically, time-based metrics are used most often because they are simple to explain to all kinds of stakeholders. We have time-based metrics in our live performance budgets, using SpeedCurve. We love how SpeedCurve handles performance budgets, which has helped us achieve our goals. Here is a screenshot of our current dashboard, displayed on some of the screens at our office:

In the case illustrated above, our live performance budgets are applied to our most visited pages. Each column is a different page and each row is a different metric. This enables us to evaluate each metric relative to its performance budget. Green means under budget and red means over budget.

If you are looking for a standard for setting a performance budget, Google provides some guidance.

Achieving a TTI of 5 seconds on a slow 3G connection with a mid-range Android phone is very ambitious. It requires keeping your critical bundle to around 170 KB (minified and gzipped) with everything included: first-party code, framework and tools. This forces you to question every single thing you add to a page, which is a good practice.

How to create awareness around performance across the company?

We created the above image a long time ago. It still reflects our ideals and all the different areas we need to work on. One of these areas is our performance culture, which requires a lot of socialising and connecting many individuals to their contributions toward our overall performance goals. This is definitely the hardest part, because there are often many interests and objectives at odds with each other, and performance is just one of them. How do we connect the dots and foster a true culture of performance?

Just talk with people

A culture is made of people, and people will always be the key. We wanted to break the cycle of performance being a “tech only” concern, something that is only owned by the engineering team to deal with and fix. This mindset needs to change for a true performance culture to set in.

Breaking this mindset takes time, as does everything else in performance. We need to sit down with many people, repeat ourselves a lot and persevere. Not everyone understands the impact of implementing a certain feature. Would it be acceptable if it slowed the site down by half a second?

Connecting performance metrics with business metrics is mandatory for a mature performance culture. Business drives the company and connecting its success to speed metrics encourages a broader set of stakeholders to understand and navigate the trade-offs.

  • Product owners drive the functional evolution of the product, often represented by roadmaps of features to implement. Not every product owner understands that a certain feature may have a negative impact on performance. If they don’t, the engineering team should highlight the risks as soon as possible. Product owners must be aware of any impacts to make more holistic product decisions informed by any trade-offs. This way they become part of the solution and not the problem. Performance budgets help guide them in understanding if a specific feature is worth it or not.
  • Designers may not understand how the bootstrap of an application works, and they shouldn’t need to. However, they can design the application with a performance mindset by understanding that not all bytes are created equal. This is particularly true for mobile device experiences. The designs should consider the user journey of loading the application. A designer that understands the critical rendering path will design better user experiences.
  • Marketing can be busy trying to bring more traffic to the site and configuring ad campaigns, for example. Third-party providers may also claim that they bring no performance overhead. Engineering should work closely with marketing to verify these claims. Creating a strong governance process for third parties can protect many of the standards that are important for the company, such as performance.

In the end, this should be a collaborative process where no “dead weight” is dropped on another team and everyone acknowledges that the speed of the site is a shared responsibility.

Be accountable

Setting goals, being public with them and reporting progress are all tools of an accountable environment. This engages the community and motivates others to be part of this journey.

We have performance goals every year, but we realised that we were lacking a long-term vision for performance. We set our eyes on TTI as our prize because we struggle with it: all the JavaScript executed on the page (first- and third-party) delays fluid interactivity for our users.

For our long-term goal, we aimed for a 50% TTI reduction. We also created a number of short-term goals to contribute to our grander vision.

As mentioned above, we also create weekly performance reports to show progress on our metrics. Every two months we create a deck containing all the highlights of our progress in addition to our next planned actions. There’s always something to look forward to.

Think global

We need to reach a lot of people that work on our site, people that may be working in different offices around the world. It is impossible to sit down with everyone and provide insight on everything that matters. This is why we have a dedicated online space for performance. There we share all of the completed work, goals, reports, and resources such as guidelines and best practices.

We also present tech talks internally, and externally, to keep the rhythm going.

Celebrate wins

As a multidisciplinary team, we can create momentum in different areas that contribute to improved overall performance and experience. As we accumulate new performance wins, we’ve identified the need to celebrate our shared successes. Celebrating helps our culture recognise our progress, sustain our motivation, and reinforce our momentum.

Thus we bring a strong performance culture to life by creating this loop:

We aren’t yet at the maturity level we want, but we are making efforts to get there. We know that a performance culture is not something that is created overnight; it takes time. Nevertheless, we believe the investment of time and energy is completely worth it.

It’s like performance itself: it’s never truly done, which is another reason why we should enjoy the ride while we are at it.