ManoMano Web Performance journey

Stephane Biancotto
ManoMano Tech team
6 min read · Mar 28, 2022

--

From blind spot to broad perspective.

Make the web faster, Web Vitals, Technical SEO, How we improved our conversion rate to X%: in today’s world, we have all heard these statements and this vocabulary.

At ManoMano, this topic is obviously central when you run a marketplace that gathers 50 million visitors a month across 6 countries.

Through a series of articles, I will present how we brought performance back to the heart of the company’s strategy, moving from a blind vision to a less and less blurry one over the months.

Where did we start from?

When I arrived at ManoMano in May 2021, performance was on people’s minds and a few metrics existed, but in the end nothing was really mastered or used by the teams.

The initial step to improve web performance in a “hypergrowth” environment is to set up a culture around it.

Our first lever was to play on competitive instinct. In France there is a reference ranking, updated each month, of e-commerce websites’ performance published by the JDN (Journal du Net). Since this ranking is only monthly, we reproduced the calculation method internally and built a dashboard that let our employees follow our estimated position daily.

This daily ranking brings two benefits:

  1. Incentivize the teams to try to gain positions in the next official monthly ranking
  2. Share a daily overview of our positioning, and thus prioritize improvements accordingly
ManoMano.fr ranking with web vitals metrics

Once this French ranking had been reproduced, we were able to apply it to the websites of all the other countries where ManoMano is present (Germany, Italy, Spain, United Kingdom).

This ranking uses the Chrome UX Report API (CrUX) to retrieve the Core Web Vitals on the origin (the domain). If you are interested in how to use it, read the article “How we retrieve performance metrics from public API”.
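To give a concrete idea of that call, here is a minimal sketch in TypeScript of a CrUX records:queryRecord request. The CRUX_API_KEY environment variable and the fetchOriginWebVitals helper are illustrative names for this example, not our actual implementation.

```typescript
// Minimal sketch: query the CrUX API for origin-level Core Web Vitals.
// CRUX_API_KEY is an illustrative environment variable name.
const CRUX_ENDPOINT = `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${process.env.CRUX_API_KEY}`;

async function fetchOriginWebVitals(origin: string) {
  const response = await fetch(CRUX_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      origin,
      metrics: [
        "largest_contentful_paint",
        "first_input_delay",
        "cumulative_layout_shift",
      ],
    }),
  });
  if (!response.ok) throw new Error(`CrUX request failed: ${response.status}`);
  const { record } = await response.json();
  // record.metrics.<name>.percentiles.p75 holds the 75th percentile value,
  // record.metrics.<name>.histogram holds the good / needs improvement / poor densities.
  return record.metrics;
}

// Example: fetchOriginWebVitals("https://www.manomano.fr").then(console.log);
```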

From blind.

In web performance we have two types of metrics, synthetic (lab) and field (Real User Monitoring):

  • Lab data is collected in a controlled environment with predefined settings
  • Field data (Real User Monitoring) is collected from real visitors on your website

The next step was to collect data for a more precise vision. The appropriate tool for this is Lighthouse, a very powerful lab test tool. With 59 different performance audits, it lets you dive deep and get a finer view of what is going on. It is available through the PageSpeed API; read this article to learn how we used it.
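As a rough illustration of how Lighthouse can be driven through the PageSpeed API, here is a small sketch. The runLighthouse helper and the PSI_API_KEY variable are hypothetical names used for this example only.

```typescript
// Sketch: run Lighthouse through the PageSpeed Insights API (v5)
// and read back the performance score and the individual audits.
const PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

async function runLighthouse(url: string, strategy: "mobile" | "desktop" = "mobile") {
  const params = new URLSearchParams({
    url,
    strategy,
    category: "performance",
    key: process.env.PSI_API_KEY ?? "",
  });
  const response = await fetch(`${PSI_ENDPOINT}?${params}`);
  const result = await response.json();

  const lighthouse = result.lighthouseResult;
  return {
    score: lighthouse.categories.performance.score * 100, // 0–100 performance score
    audits: lighthouse.audits, // the individual performance audits
  };
}
```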

The most viewed pages on ManoMano sites, and therefore the ones to optimize first on an e-commerce website, are category pages and product pages.

So we started by building dashboards with performance indicators on about ten metrics such as boot-up time, main-thread breakdown, server response time, and long tasks over 50 ms (this specific threshold is the one used for First Input Delay).

Total blocking time relevant values
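For completeness, long tasks can also be observed directly in the browser with the Long Tasks API, which only reports tasks longer than 50 ms. Here is a minimal sketch; reportLongTask stands in for whatever reporting pipeline you plug it into.

```typescript
// Sketch: observe main-thread tasks longer than 50 ms with the Long Tasks API.
// reportLongTask is an illustrative stand-in for a real reporting helper.
function reportLongTask(entry: PerformanceEntry): void {
  console.log(`Long task: ${entry.duration.toFixed(0)} ms at ${entry.startTime.toFixed(0)} ms`);
}

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // "longtask" entries are only emitted for tasks over 50 ms.
    reportLongTask(entry);
  }
});
observer.observe({ type: "longtask", buffered: true });
```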

Thanks to these first indicators, we managed to observe variations, allowing us to detect regressions and locate them more precisely. This gave us a strong gain in reactivity compared to the metrics we had been monitoring until then, CrUX’s Core Web Vitals, which are averaged over 28 rolling days.

Alongside these indicators, we also set up monitoring of the DOM size and of duplicated code, values which at first glance seem trivial but are important: the browser spends time parsing HTML and JavaScript, so useless code has to be removed.
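A DOM-size probe can be as simple as counting nodes on the client; the sketch below is illustrative, not our production monitoring.

```typescript
// Sketch: a trivial DOM-size probe that could feed a monitoring dashboard.
function measureDomSize(): { nodeCount: number; maxDepth: number } {
  const nodes = document.querySelectorAll("*");
  let maxDepth = 0;
  nodes.forEach((node) => {
    // Walk up to the root to compute the depth of each element.
    let depth = 0;
    for (let el: Element | null = node; el; el = el.parentElement) depth++;
    if (depth > maxDepth) maxDepth = depth;
  });
  return { nodeCount: nodes.length, maxDepth };
}
```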

The teams subsequently adopted the new indicators; they became more trusted and are now often used to monitor slippages and quickly roll back or fix problems.

Lighthouse also offers an interesting diagnostics block: number of tasks (overall and over 10 ms, 25 ms, 50 ms and 100 ms), number of scripts, page weight, total task time, and number of requests.

Using this, we get another type of view. As the name says, it is a diagnostic: not a detailed indicator, but a good technical weather report.

Lighthouse diagnostic metrics
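These figures live in the Lighthouse JSON itself, inside the built-in diagnostics audit. A possible way to pull them out of a PageSpeed result could look like the following; extractDiagnostics is an illustrative helper and the field names are the ones Lighthouse exposes.

```typescript
// Sketch: extract the diagnostics block from a Lighthouse result
// (for instance the `audits` object returned by the PageSpeed API).
function extractDiagnostics(audits: any) {
  // The built-in "diagnostics" audit stores its figures in details.items[0].
  const d = audits["diagnostics"]?.details?.items?.[0] ?? {};
  return {
    numTasks: d.numTasks,
    numTasksOver10ms: d.numTasksOver10ms,
    numTasksOver25ms: d.numTasksOver25ms,
    numTasksOver50ms: d.numTasksOver50ms,
    numTasksOver100ms: d.numTasksOver100ms,
    numScripts: d.numScripts,
    numRequests: d.numRequests,
    totalByteWeight: d.totalByteWeight, // page weight, in bytes
    totalTaskTime: d.totalTaskTime, // total main-thread task time, in ms
  };
}
```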

In web performance, everybody knows that internal code is not the only possible cause of bottlenecks: third-party scripts are often heavily responsible too. So we set up dashboards to visualize the time they consume and identify the worst offenders.
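Lighthouse exposes this through its third-party-summary audit. Here is a hedged sketch of ranking third parties by main-thread blocking time, assuming the same audits object as in the previous snippets; worstThirdParties is an illustrative helper.

```typescript
// Sketch: rank third parties by main-thread blocking time using Lighthouse's
// "third-party-summary" audit. The exact shape of the entity label depends on
// the Lighthouse version, so it is kept as `unknown` here.
type ThirdPartyItem = { entity: unknown; blockingTime?: number; transferSize?: number };

function worstThirdParties(audits: any, limit = 5): ThirdPartyItem[] {
  const items: ThirdPartyItem[] = audits["third-party-summary"]?.details?.items ?? [];
  return [...items]
    .sort((a, b) => (b.blockingTime ?? 0) - (a.blockingTime ?? 0))
    .slice(0, limit);
}
```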


We still have two indicators to look at, which are neither Core Web Vitals nor purely technical ones: Speed Index and Time to Interactive. These two metrics have no sub-values that can explain variations, but keeping an eye on them is a good way to get a global overview of the website’s speed.

  • Speed Index tells us when the page is visually complete.
  • Time to Interactive is the moment when a visitor can interact with the page.

What we want to know is what really happens between time 0 and the speed index.

With the diagnostics we know how much time is consumed in scripts and how many tasks of more than X ms we have. But we would like to know more precisely what is happening.

Next.js gives us two default indicators out of the box, before-hydration and hydration. These indicators use the User Timing API, a useful browser API. We also use it to measure the rendering time of our components.

Basic User Timing API from Next.js
Custom User Timing API markers
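As an illustration, here is roughly what this looks like with the Next.js pages router: the built-in custom metrics (such as Next.js-hydration) can be read through reportWebVitals, and extra markers can be added with performance.mark and performance.measure. The product-card marker name is a made-up example.

```typescript
// pages/_app.tsx: sketch of reading Next.js' built-in User Timing metrics.
import type { NextWebVitalsMetric } from "next/app";

export function reportWebVitals(metric: NextWebVitalsMetric) {
  if (metric.label === "custom") {
    // e.g. "Next.js-hydration", value in milliseconds
    console.log(metric.name, metric.value);
  }
}

// Custom markers for a component's render time, using the same User Timing API.
// "product-card" is a hypothetical marker name.
performance.mark("product-card:start");
// ... rendering work happens here ...
performance.mark("product-card:end");
performance.measure("product-card:render", "product-card:start", "product-card:end");
```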

Finally, still with Lighthouse but this time on RUM (field) data, we set up a dashboard with the distribution of the Core Web Vitals, in order to see progressions and regressions in the [good, needs improvement, poor] distribution. This lets us know more precisely the percentage of users in each bucket for each Web Vital, per page type.

Web vitals distribution visualization by page type.
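One way to feed such a visualization is the field-data section (loadingExperience) of the PageSpeed response, which already contains the good / needs improvement / poor proportions per metric. A rough sketch, where webVitalsDistribution is an illustrative helper:

```typescript
// Sketch: turn the field-data part of a PageSpeed API response into
// good / needs improvement / poor percentages for each Core Web Vital.
function webVitalsDistribution(psiResult: any) {
  const metrics = psiResult.loadingExperience?.metrics ?? {};
  const out: Record<string, { good: number; needsImprovement: number; poor: number }> = {};
  for (const [name, metric] of Object.entries<any>(metrics)) {
    // distributions is ordered [good, needs improvement, poor], each with a proportion 0–1.
    const [good, needsImprovement, poor] = metric.distributions.map(
      (bucket: { proportion: number }) => Math.round(bucket.proportion * 100),
    );
    out[name] = { good, needsImprovement, poor };
  }
  return out;
}
```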

With all this in place, we are aware of the performance of our web pages: we now have an almost real-time view of many indicators that can be cross-referenced with infrastructure indicators, database load, synthetic page response times, or concurrent user counts from Google Analytics.

It has been a long journey, with lots of investigation. We now have a good basis compared to the blindness we started with.

Feature teams now have some visibility into what is happening and, more importantly, know that we are on the way to full performance supervision.

Performance is collaborative work. Listening to what the teams want will help us raise the performance culture and write code with performance in mind.

Stay tuned if you want to know the next step: “Blurry vision”.

We ❤️ learning and sharing

I took a lot of pleasure in writing this article; feel free to post your feedback below or reach out to me on LinkedIn. Whether you had a similar or a totally different experience, I’d love to hear about it.

Oh, and by the way: we are hiring in France and Spain.
