Caring for the Performance of our Supplier Applications — A Case Study

Ayush Koshta · Published in Meesho Tech · 4 min read · Dec 14, 2021

Our Vision Statement: ‘Enable 100 million small businesses to succeed online.’

Helping suppliers sell across Bharat is one of Meesho's core propositions. Our prevalence in Tier 2 and below markets makes us a unique internet commerce company. Helping a tech-agnostic audience navigate online selling is critical to our success.

When this is coupled with poor internet infrastructure, it’s all the more important to build solutions that factor in inherent systemic problems.

Our secret sauce: we thrive in the most challenging network conditions and unfamiliar environments. Read: Tier 2/3/4 cities. This is the real India, where more than 75% of Indians live.

There are a number of things we do to help our suppliers — this is one such story.

Problem statement

Every supplier has a ‘web application’ to track payments, orders, catalogue, etc. We call this the ‘Supplier Panel’. Measuring the performance (load time, visual stability, web server responsiveness, etc.) of these millions of pages is critical to lessening the burden on suppliers.

Since these platforms are password-protected (think: your personal Facebook profile), traditional performance monitoring tools do not work. So we built a custom monitoring solution to measure the performance of our ‘supplier panels’, with Google’s ‘Web Vitals’ as its foundation.

What are Web Vitals?

Remember Google’s secret algorithm?
The super-secret one? 😝
The one hundreds of ‘growth’ marketers are trying to optimise for? 😉 That.

Google launched Web Vitals so that folks could understand how it measures every website on behalf of end users. There are a plethora of tools to analyse Web Vitals, each with its own set of use cases, benefits, and downsides.

We wanted to look at the most important Web Vitals and help improve the user experience for our suppliers. These pivotal metrics are called ‘Core Web Vitals’.

What are Core Web Vitals?

  • Largest Contentful Paint (LCP): measures loading performance (how long the largest piece of content on the page takes to render);
  • First Input Delay (FID): measures interactivity (how long the browser takes to respond to a user’s first interaction);
  • Cumulative Layout Shift (CLS): measures visual stability (how much the content unexpectedly shifts around while loading).
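
For the curious, Google ships an open-source web-vitals npm package that reports these numbers from real browsers. A minimal sketch using its v2-era getXXX API (the console.log calls stand in for whatever reporting pipeline you use):

```typescript
import { getCLS, getFID, getLCP } from 'web-vitals';

// Each callback receives a Metric object ({ name, value, delta, id, ... })
// once the value is ready to report. For LCP and CLS that is typically
// when the page is backgrounded or unloaded; FID fires after the first input.
getLCP((metric) => console.log('LCP:', metric.value, 'ms'));
getFID((metric) => console.log('FID:', metric.value, 'ms'));
getCLS((metric) => console.log('CLS:', metric.value)); // unitless score
```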

The tools needed to measure this can be categorised into two types:

  • Lab data: lab data tools compute the score in a simulated environment where the network and surroundings can be predefined;
  • Field data: field data tools gather measurements from the site’s real users, so scores vary with each user’s environment and network conditions.

How did we go about measuring performance?

We started off by using lab data monitoring tools. However, this did not provide us with a true picture of how effectively the app worked for our end users.

Lab data is a fantastic way to debug and resolve low scores, but we first had to figure out how to get our hands on actual user data.

Here are the things we tried that didn’t work out:

  • Firebase — Was a good starting point, but it didn’t provide us with all of the Web Vitals;
  • Google Analytics — The data was sampled, and didn’t provide us with a comprehensive picture;
  • Google Data Studio — Was ineffective because of the sheer volume of data generated.

FINALLY, we decided to develop our own dashboard to monitor the performance using Google Analytics’ APIs 🛠
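
We won’t unpack our whole pipeline in this post, but the widely documented pattern (see web.dev) is to forward each metric to Google Analytics as an event, then pull the raw events back out through GA’s reporting APIs into your own dashboard. A sketch of the browser side, assuming gtag.js is already loaded on the page:

```typescript
import { getCLS, getFID, getLCP, Metric } from 'web-vitals';

// Assumption: gtag.js (Google Analytics) is already loaded on the page.
declare function gtag(...args: unknown[]): void;

function sendToGoogleAnalytics({ name, delta, id }: Metric): void {
  gtag('event', name, {
    event_category: 'Web Vitals',
    event_label: id, // lets you aggregate events per page load
    // GA event values must be integers; CLS is scaled up to keep precision.
    value: Math.round(name === 'CLS' ? delta * 1000 : delta),
    non_interaction: true, // don't let these events affect bounce rate
  });
}

getCLS(sendToGoogleAnalytics);
getFID(sendToGoogleAnalytics);
getLCP(sendToGoogleAnalytics);
```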

What worked for us?

  • We used code splitting to break our main bundle into smaller chunks and load just the necessary components at first (see the first sketch below).
    This made the initial load much faster, since we no longer ship the entire bundle up front: the main bundle of around 600 kB came down to an initial chunk of 120 kB. Our suppliers use the application in Tier 2-3 cities with varying network conditions, so this helped a lot.
  • To decrease code size, we serve our static resources with Brotli compression (see the second sketch below).
    With less code being shipped, page load times improved: we cut nearly 18% of the code shipped!
  • We make use of HTTP/2, which provides multiplexing:
    the browser can load multiple requests concurrently over a single connection, giving a faster load for all-important web assets.
  • We stabilised the layout during loading, to minimise the abrupt experience of the content being all over the place, images not syncing properly, and the screen shimmering weirdly instead of displaying everything smoothly.
    This helped us improve our CLS score enormously.
  • Using Intersection Observers, we lazy-load images and videos (see the last sketch below).
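
To make the code-splitting point concrete, here’s a sketch of route-level splitting, assuming a React app (PaymentsPage is an illustrative name, not our actual module). Any modern bundler emits a dynamically imported module as a separate chunk:

```typescript
import React, { lazy, Suspense } from 'react';

// The dynamic import() tells the bundler to put this page in its own
// chunk, fetched over the network only when the route actually renders.
const PaymentsPage = lazy(() => import('./pages/PaymentsPage'));

export function App() {
  return (
    // Until the chunk arrives, the fallback keeps the UI stable.
    <Suspense fallback={<div>Loading…</div>}>
      <PaymentsPage />
    </Suspense>
  );
}
```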
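For the Brotli point, one common setup (our actual build may differ) is to pre-compress assets at build time so the server only has to hand over the right file. A sketch assuming webpack with compression-webpack-plugin:

```typescript
// webpack.config.ts
import CompressionPlugin from 'compression-webpack-plugin';

export default {
  plugins: [
    new CompressionPlugin({
      filename: '[path][base].br',  // emit main.js.br next to main.js
      algorithm: 'brotliCompress',  // Node's built-in zlib Brotli
      test: /\.(js|css|html|svg)$/, // only compressible text assets
    }),
  ],
};
```

The server (or CDN) then serves the .br variant to any browser that sends Accept-Encoding: br.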
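And for lazy-loading, the browser’s IntersectionObserver API does the heavy lifting: an image only gets its real src once it approaches the viewport. A minimal sketch (the data-src convention is ours for illustration):

```typescript
// Images are marked up as <img data-src="https://..."> with no src,
// so the browser doesn't fetch them during the initial page load.
const observer = new IntersectionObserver(
  (entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src!; // swap in the real source
      obs.unobserve(img);         // each image only needs this once
    }
  },
  { rootMargin: '200px' } // start fetching a little before it's visible
);

document
  .querySelectorAll<HTMLImageElement>('img[data-src]')
  .forEach((img) => observer.observe(img));
```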

Our ambitions for performance measurement:

We took a month to do this task incrementally. It was data-driven, analytical, and carefully done to help our suppliers in poor network conditions. Our aim is to progressively expand on this and add features depending on our suppliers’ use cases.

Watch this space for more on the impact we’ve created for suppliers in Tier 2 and below cities. Next time, I’ll have more data insights into the project and the critical user experiences that have shaped the trajectory of this product.


Ayush Koshta
Meesho Tech

Software developer by profession | Techno Utopian | Amateur Photographer