Optimizing for Core Web Vitals at Homegate: things we learned along the way

Olga Skurativska
SMG Real Estate Engineering Blog
3 min read · May 4, 2021
Remember the broken windows theory? (Photo by Nomadic Julien on Unsplash)

The vital nudge

Google’s announcement that Core Web Vitals would become ranking signals in May 2021 nudged many companies to take a closer look at the speed and visual stability of their pages. Homegate — a Swiss digital real-estate marketplace — was no exception.

The newly released isomorphic Vue.js application that powered our real-estate search was not performing well.

At the start of the project, the engineering teams put a lot of effort into writing performant, accessible and maintainable code. Lighthouse checks were part of our CI pipeline, and we were truly committed to keeping our scores above the recommended levels.
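For illustration, a Lighthouse CI configuration along these lines can enforce such a budget in a pipeline. This is a sketch, not our actual setup; the URL and thresholds are placeholders:

```js
// lighthouserc.js — a minimal Lighthouse CI budget (illustrative sketch)
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'], // page(s) to audit in CI
      numberOfRuns: 3,                 // median of several runs smooths noise
    },
    assert: {
      assertions: {
        // fail the build if the performance score drops below 0.9
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.9 }],
      },
    },
  },
};
```

Running `npx lhci autorun` in the pipeline then fails the build whenever an `error`-level assertion is violated.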

So what happened?

The broken window effect

It only took integrating a few third-party scripts before we felt we could no longer trust our Lighthouse measurements.

The lower our score got, the more variability we experienced: on some days the Lighthouse score fluctuated between 0 and 60. There was no way of telling whether the features we released afterwards were implemented well.
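One way to see this variability for yourself is to script several Lighthouse runs against the same page and look at the spread of scores, for example with the lighthouse and chrome-launcher npm packages. An illustrative sketch, not the tooling we actually used:

```js
// measure-variance.js — run Lighthouse N times against one URL and
// report the spread of performance scores (illustrative sketch)
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function run(url, runs = 5) {
  const scores = [];
  for (let i = 0; i < runs; i++) {
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const { lhr } = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
    });
    // lhr exposes the score as a 0–1 fraction; scale to the familiar 0–100
    scores.push(Math.round(lhr.categories.performance.score * 100));
    await chrome.kill();
  }
  scores.sort((a, b) => a - b);
  console.log(`scores: ${scores.join(', ')}`);
  console.log(`median: ${scores[Math.floor(scores.length / 2)]}`);
  console.log(`spread: ${scores[scores.length - 1] - scores[0]}`);
}

run('https://www.homegate.ch/').catch(console.error);
```

A wide spread between the minimum and maximum is exactly the symptom we were fighting: single runs stop being meaningful, and only medians over many runs can be compared.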

Prioritizing the technical SEO work became harder too. It was difficult to make a case for the potential impact of each initiative with no numbers to back up our gut feelings.

The many ways to measure

It also didn’t help that different people were looking at page speed and visual stability from different angles.

We were running our checks on different pages and environments, using different tools, and sometimes accidentally comparing mobile scores to desktop ones. Opinions varied on which metric was affected the most and where we should start our improvements.

We desperately needed alignment and a better understanding of the subject.

Turning the ship around

It took us some real cross-discipline collaboration to break the stalemate.

Engineering had to work with product on developing a way to measure performance continuously and reliably; both had to take a deeper dive into the theory behind the individual performance metrics to be able to speak the same language and prioritize improvements.

Things we learned along the way

There’s still a long way ahead of us before we get to green numbers. But here’s what we wish we had known all along:

  • If your website’s performance is bad, chances are there is no simple fix.
  • It is incredibly difficult and not very rewarding to work on performance after the website is built: it might take a lot of work before the first improvements show up in the score.
  • It is much cheaper to monitor performance constantly and mitigate problems before they accumulate (see the field-monitoring sketch after this list).
  • Engineers are not the only ones responsible for understanding web performance and keeping it at a reasonable level. Tech leads, product and UX have to be well informed to make good decisions and prioritize the work.
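On the monitoring point, field data can be collected from real user sessions with Google’s web-vitals library and sent to whatever analytics backend you use. A minimal sketch following the library’s documented pattern — the /analytics endpoint is a placeholder, and getCLS/getFID/getLCP are the v1/v2-era API names matching this post’s timeframe:

```js
// vitals.js — report Core Web Vitals from real user sessions
// (sketch; '/analytics' is a placeholder endpoint, not an actual API)
import { getCLS, getFID, getLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // 'CLS' | 'FID' | 'LCP'
    value: metric.value, // CLS is unitless; FID and LCP are in milliseconds
    id: metric.id,       // unique per page load, useful for deduplication
  });
  // sendBeacon survives page unloads; fall back to fetch if unavailable
  (navigator.sendBeacon && navigator.sendBeacon('/analytics', body)) ||
    fetch('/analytics', { body, method: 'POST', keepalive: true });
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```

Unlike lab runs in CI, this measures what real users experience across devices and network conditions, which is also what Google’s ranking signals are based on.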

If Homegate’s story resonates with you, be sure to check out the following posts in this series, where we go into more detail on how we got ourselves out of the stalemate.


Olga Skurativska

A Principal Frontend Engineer at Homegate who just won’t shut up about JavaScript, serverless, a11y, performance, UX, DX, design systems and tech leadership.