How we got into the TOP 3 of the fastest e-commerce websites 🛍🛒

We reduced the load time of our homepage by 4 seconds and brought the perceived load time of our subpages down to 2.3 seconds on average.

Ingo Stöcker
idealo Tech Blog
8 min read · Aug 21, 2018


Page load times, the internet will tell you, are essential to the success of any e-commerce website including idealo — Germany’s leading price comparison platform. In the springtime of this year, our SEO team surprised us with the news that we had fallen behind our competitors. In fact, we had slipped to the bottom of a list of competitor websites that the team was monitoring for page load times every few months.

What did we do? Well, we decided to change this. Our main objective for the upcoming quarter was to improve our page load times enough to put us back into the top 3. By the end of the quarter, we had succeeded. In this blog post, we'd like to share our experience: which measures we tried that worked out well, and which had no effect at all.

Choosing the right people for the job

But before we could actually start the project, we had to assemble the right team and kick it off effectively. Apart from the obvious choices like front-end and back-end developers, we also included SEO experts, Product Owners and an Agile Coach in the initial workshop. The goal was to collect the data we already had about our pages and users. We then generated ideas for actions that would have a high impact on our goal. The result was a list of items that we grouped by likely effect and effort.

Although idealo embraces an agile working environment throughout the entire company, different teams can work differently depending on their preferences and team setups. Having such a diverse task force meant that we had to find a working mode that suited everyone, regardless of whether they followed Scrum, Kanban or another approach in their respective teams.

We decided on daily stand-ups with the whole team (~10 people) and a physical board holding the stories from our workshop, from which everyone could pick something to work on without lengthy grooming or pointing overhead. The physical board, a custom Jira board and our monitoring dashboard (which we'll talk about in the next section) really helped us to track, evaluate and discuss our work efficiently.

Knowing what to measure is key

Since our objective was to be among the top 3 fastest of the competing e-commerce sites, we first had to define our main metrics and their benchmarks. Based on a competitor analysis, we defined the key results we needed to achieve to hit our goal.

Key Results

  • time to first byte < 600ms
  • first paint < 1.1s
  • time to interactive < 2.5s
  • page fully loaded < 4s
  • SpeedIndex < 1500

Measuring

Before we started measuring, we could already rely on some existing backend performance tests. We added tests for the remaining page types to get better feedback on server-side metrics before every deployment. The next step was to create a baseline for the load times of the pages we were focusing on. After some research we decided to use the open-source tools from sitespeed.io as our source of truth.

Sitespeed.io provides Docker images which can be deployed, for instance, on AWS. We learned that it's important to measure from external locations that go through the whole network infrastructure, since that simulates a real-world user experience. We then deployed our measuring infrastructure in the AWS Frankfurt region (eu-central-1).
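
If you want to try this setup yourself, the official Docker image can be run directly against a page and push its metrics to Graphite. A minimal sketch (URL, iteration count and Graphite host are placeholders, not our actual configuration):

```bash
# run 5 iterations against one of the monitored pages and ship the metrics to Graphite
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:latest \
  https://www.example.com/ -n 5 \
  --graphite.host graphite.internal.example.com
```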

Out of the box, sitespeed.io offers a nice monitoring solution via Grafana's customizable dashboards. We collected several example pages for each page type we focus on and configured multiple measuring points. Additionally, we used Graphite's annotation feature to visualize external events like feature toggles and deployments. This led to a better understanding of the graph trends in terms of cause and effect.
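
Such an annotation can, for example, be created by posting an event to Graphite, which Grafana can then display on top of the metric graphs. A hedged sketch, assuming graphite-web's events API is reachable (host name and payload are placeholders):

```bash
# record a deployment as a Graphite event so it shows up as an annotation in Grafana
curl -X POST "http://graphite.internal.example.com/events/" \
  -d '{"what": "deployment", "tags": ["deployment", "frontend"], "data": "release 2018-08-21"}'
```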

At the end of the day, we had a powerful tool which gave us an overview over time and the opportunity to verify our improvements instantly. In addition to the automated metrics collection, we started running manual webpagetest.org comparisons against our main competitors on a weekly basis. We also generated weekly video comparisons with competitors to see the visual changes from the user's perspective over time. Furthermore, we triggered Lighthouse reports (Google's benchmarking tool) for every page type after each deployment.
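
The Lighthouse runs can be scripted as part of the deployment pipeline; a minimal sketch using the Lighthouse CLI (URL and output path are placeholders):

```bash
# generate a Lighthouse report for one page type after a deployment
npx lighthouse https://www.example.com/some-category/ \
  --output html --output-path ./reports/category.html --quiet
```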

Backend — hunt for the first byte

We investigated bottlenecks and found that some of our services had slower response times than expected, which made the page composition slower than necessary. Recognizing this, we initially had no leverage for a faster initial server response. Therefore, the developers analyzed which modules of the pages could be cached completely or cached with a better strategy. By doing this, we also learned that there were even caches for already cached content, which obviously made no sense but had crept in unnoticed over time. By removing those unnecessary caches we were also able to reduce complexity.

While profiling the composition of our page data, we saw that several parts of the pages could be processed in parallel.
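
The post doesn't go into the backend stack, but the idea is that independent page modules don't have to be fetched one after another. A hypothetical sketch in Java (all module names and types are placeholders): the total composition time becomes the slowest call instead of the sum of all calls.

```java
import java.util.concurrent.CompletableFuture;

public class PageComposer {

    // placeholder module loaders standing in for the real (slow) service calls
    static String fetchHeader()  { return "header";  }
    static String fetchOffers()  { return "offers";  }
    static String fetchReviews() { return "reviews"; }

    public static void main(String[] args) {
        // start all independent parts of the page at the same time
        CompletableFuture<String> header  = CompletableFuture.supplyAsync(PageComposer::fetchHeader);
        CompletableFuture<String> offers  = CompletableFuture.supplyAsync(PageComposer::fetchOffers);
        CompletableFuture<String> reviews = CompletableFuture.supplyAsync(PageComposer::fetchReviews);

        // wait for all of them at once and assemble the page
        String page = CompletableFuture.allOf(header, offers, reviews)
                .thenApply(v -> header.join() + " | " + offers.join() + " | " + reviews.join())
                .join();

        System.out.println(page);
    }
}
```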

Frontend — the critical path

JavaScript — beforehand we had five JS requests in total: three webpack bundles (client error tracker, vendors, page logic) and two Grunt-concatenated JS files (some third-party libraries plus legacy page code, and real user monitoring).

We decided to load the JS bundles for client error tracking (in <head>) and real user monitoring (at the page bottom) asynchronously because they are less relevant for the page rendering. We also split the page-wide bundles and files into per-page-type bundles so that each page type only loads the code it requires. Next, we moved vendor libraries that are only needed by some page types into these bundles. To reduce the payload of the JS modules, we also added bundles for modern browsers which support ES6 modules. They are almost 10 kb lighter on average due to less transpiled code.
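
In the markup, this splitting can look roughly like the following sketch (all file names are placeholders): the error tracker loads asynchronously in <head>, modern browsers pick the lighter ES6 bundle via module/nomodule, and real user monitoring loads asynchronously at the bottom.

```html
<head>
  <!-- error tracking: async, not needed for rendering -->
  <script async src="/assets/js/error-tracker.js"></script>
</head>
<body>
  <!-- ... page content ... -->

  <!-- per-page-type bundle: modern browsers load the smaller ES6 build,
       older browsers fall back to the transpiled one -->
  <script type="module" src="/assets/js/product-page.esm.js"></script>
  <script nomodule src="/assets/js/product-page.legacy.js"></script>

  <!-- real user monitoring at the page bottom, also async -->
  <script async src="/assets/js/rum.js"></script>
</body>
```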

CSS — after splitting the JS modules, we looked into faster browser rendering and painting. We already had a page-wide “above-the-fold” CSS which was embedded inline in <head>, while the complete CSS bundle was loaded asynchronously at the page bottom. Furthermore, there was a cookie-based toggle so that only the very first user request went through this flow; otherwise we loaded the CSS synchronously in <head>.

We removed the cookie toggle to reduce this kind of complexity. The existing “above-the-fold” CSS was also way bigger than the recommended 14 kb (gzipped), which is why we had to split it into page-based bundles. In the end our above-the-fold CSS was much lighter, but still a bit more than 14 kb because of too many dependencies between some CSS modules.

Additionally, we split the “below-the-fold” CSS per page. Afterwards we improved the embedding of these CSS bundles, because the previous asynchronous reference still blocked the rendering process. We switched to preloading the “below-the-fold” CSS bundles to achieve a truly non-blocking approach. To meet our browser support policy, we had to add a polyfill (inline JS). To avoid layout flickering while rendering our icon font, we also preloaded the WOFF font file.
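
Put together, the <head> of a page then looks roughly like this sketch (file names are placeholders; the inline polyfill for browsers without rel="preload" support is omitted):

```html
<head>
  <style>/* inlined above-the-fold CSS for this page type */</style>

  <!-- below-the-fold CSS: preloaded without blocking rendering,
       applied as a stylesheet once it has been fetched -->
  <link rel="preload" href="/assets/css/product-page.below.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/assets/css/product-page.below.css"></noscript>

  <!-- preload the icon font to avoid layout flickering when it is rendered -->
  <link rel="preload" href="/assets/fonts/icons.woff" as="font" type="font/woff" crossorigin>
</head>
```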

Lazy-Loading — another big topic was the embedding of images. Beforehand we had several different lazy-loading implementations across the page modules. We focused on improving the image loading of the result lists on the category and search pages.

Some of our competitors don't use lazy loading at all. After profiling our pages it was clear that there was a huge gap between the first meaningful paint and the painting of the images, because the responsible JS module was loaded late in the rendering process. We removed this gap by referencing the “above-the-fold” images directly, applying this to the first 12 product images. This rule held for every supported viewport (two, three and four tiles per row). Nevertheless, we kept our JS logic for the “below-the-fold” images.
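
As a sketch, the result-list markup then follows this pattern (class and attribute names are placeholders, not our actual implementation):

```html
<!-- first 12 product tiles: images referenced directly so they paint with the page -->
<img src="/images/product-1.jpg" alt="Product 1">
<!-- ... products 2 to 12 ... -->

<!-- remaining tiles: no src yet, the lazy-loading JS swaps in the real image
     once the tile approaches the viewport -->
<img class="lazy" data-src="/images/product-13.jpg" alt="Product 13">
```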

Another big topic was the embedding of layout images. We followed the common advice to inline those images, especially above the fold. The sum of all these adjustments to CSS and JavaScript brought down our time to first paint and time to interactive.
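
Inlining a small layout image can mean embedding it as a data URI so it ships with the above-the-fold CSS instead of causing an extra request. A hypothetical sketch (selector and graphic are placeholders):

```html
<style>
  /* small decorative background embedded as an SVG data URI */
  .header-divider {
    background-image: url("data:image/svg+xml;charset=utf-8,%3Csvg xmlns='http://www.w3.org/2000/svg' width='4' height='4'%3E%3Crect width='4' height='4' fill='%23f60'/%3E%3C/svg%3E");
    background-repeat: repeat-x;
  }
</style>
```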

GoogleTagManager

One of the largest impacts on load times for us comes from tracking pixels. idealo uses Google Tag Manager (GTM) to manage and include required tags like Google Analytics.

Google recommends inserting it at the top of the HTML body. Some of our competitors chose a different approach and include GTM at the page bottom. We started with a toggle-based test to check whether this has any impact on time to first paint and time to interactive. Our hypothesis was that the overall page load time shouldn't change at all; however, there were some concerns about missed user tracking. With a toggle you can switch at runtime and quickly disable it again if you observe negative effects. The Product Owners coordinated the time frame of the GTM positioning test with all stakeholders. The result was as expected: the metrics for the “fully loaded” event stayed the same, there was no significant drop in tracking, and there was a positive influence on the metrics mentioned before.

Besides that, there were also some other adjustments to GTM. Our Google Tag Manager developers removed obsolete code and moved the triggers of less important tags from “DOM ready” to “page loaded”.

One more thing(s)

Because we were limited to one quarter, there are still some things left to do.

Backend

  • use HTTP/2 for server-rendered HTML pages

Frontend

  • initially load only the content that is relevant to the user

JavaScript

  • move legacy JS modules to webpack to benefit from features like “tree shaking”
  • get rid of our homebrew RUM (JS file)
  • load JS modules only when they are needed
  • page-wide lazy loading of images with IntersectionObserver (see the sketch below)
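
For the last item, a minimal sketch of what IntersectionObserver-based lazy loading could look like (the .lazy class and data-src attribute are placeholders, not our current implementation):

```js
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src;   // swap in the real image source
    obs.unobserve(img);          // each image only needs to be loaded once
  }
}, { rootMargin: '200px' });     // start loading shortly before the image scrolls into view

document.querySelectorAll('img.lazy[data-src]').forEach(img => observer.observe(img));
```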

CSS

  • refactoring of CSS to have “scoped” modules
  • get rid of icon font => inline SVG

Hopefully, you liked this article. If you found this article useful, give me a high five 👏🏻 so others can find it too, and share it with your friends. Follow me here on Medium (Ingo Stöcker) or on Twitter (@Kobe) to stay up-to-date with my work. Thanks for reading!

Thanks to Dat Tran and Stefan Willuda for your support!
