Our company was invited by Google to participate in this year’s Northern Europe Page Speed Race. The goal of this race was to achieve the highest absolute improvement in the Lighthouse Performance score of your website within two months.
Fast and engaging mobile web experiences are critical to your users and your business. User expectations on mobile are constantly growing, and it’s important to stay ahead of this demand for speed.
The pages that were part of this race and that we needed to improve were the home page, the category overview page and the product detail page. In our case, the product detail page needed the most tender love and care.
Tearing down the requests
The first step we took was looking at the requests our pages make, which can easily be done in Chrome DevTools (or with sites like WebPageTest). We found out quite easily that we were loading far too many assets, some of which weren’t even needed. For example, why would you load an image on a mobile device if it is only shown on desktop? It’s a quick check, and you can save many kilobytes with it.
Another thing we found out is that the order in which we loaded the assets wasn’t right and wasn’t benefiting our critical rendering path. After following a free Google course, we learned the following:
- CSS is render blocking, so you want to request all CSS files at the same place in the head of your HTML. When the browser sees the first link tag with a stylesheet, it starts building the CSSOM, and only when this is finished will the page render. Most browsers do some smart lookahead to see if there are more link tags in the HTML, but the fastest way is to have them bundled at the same place in your code;
Lazy loading the images
While looking at the requests, we saw that we were downloading most of the images up front, so the most logical next step for us was to implement lazy loading.
We already used the Intersection Observer on our pages for sending tagging to GTM, so we could easily extend this functionality to lazy load the images. For this we made use of the react-intersection-observer package, which makes life a little easier, but you could of course also implement it yourself.
After enabling it on our product page we immediately saw a big drop in the total image size on the page: we started at 1.21 MB and finished at 122 KB!
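Stripped of the React wrapper, the underlying pattern looks roughly like this. This is a sketch, not our actual implementation: it assumes each image carries its real URL in a `data-src` attribute, and `observeLazyImages` is an illustrative name.

```javascript
// Sketch: swap data-src for src once an image scrolls near the viewport.
// Assumes a browser environment with IntersectionObserver support.
function observeLazyImages(images, ObserverCtor = IntersectionObserver) {
  const observer = new ObserverCtor((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target;
      img.src = img.dataset.src; // triggers the actual download
      obs.unobserve(img);        // each image only needs to load once
    }
  }, { rootMargin: '200px' });   // start loading slightly before it's visible

  for (const img of images) observer.observe(img);
  return observer;
}
```

The `rootMargin` is a trade-off: a larger margin loads images earlier (fewer visible pop-ins), a smaller one saves more bytes for users who never scroll that far.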
We use webpack-bundle-analyzer to inspect our bundles. This is a very handy tool for optimizing your client-side bundle, as you can easily see which packages are (too) big in size, or which packages don’t belong in the bundle at all.
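Wiring the analyzer into a webpack config takes only a few lines; this is a hypothetical fragment, not our actual configuration:

```javascript
// webpack.config.js (fragment) — sketch of enabling webpack-bundle-analyzer
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  // ...your existing webpack configuration...
  plugins: [
    // Generates an interactive treemap report of what ends up in each bundle.
    new BundleAnalyzerPlugin({ analyzerMode: 'static' }),
  ],
};
```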
Optimizing the CSS
Within our company we use ITCSS, but the product page hadn’t really been converted to it yet, so this was a good opportunity for us to rewrite it, clean up the duplicate CSS and reuse the already existing CSS classes, which saved an extra 13 KB in the end.
As CSS is render blocking, we decided to inline the styles for the base, header and footer, which greatly improved the First Contentful Paint. The next steps we are taking are inlining the rest of the critical page styles and creating different stylesheets per device.
Caching server-side service call responses
Our pages make calls to different microservices; some are personalized, some show real-time data. Those responses we can’t cache, but we also have service responses that we can cache, some for a minute and some much longer.
We implemented node-cache, and for every call we check whether we have a cached response; otherwise we fetch from the service. Some services cache their own responses as well, but caching on our side too saves the call and the potential latency.
Of course, you have to find out what the best caching duration is when both sides are caching. For example, if both sides cache for 30 minutes, it could take up to 60 minutes before the content on the website changes.
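The get-or-fetch pattern behind this is small. We use the node-cache package, but a plain Map with timestamps shows the same idea; this is a minimal sketch with illustrative names, not our production code:

```javascript
// Minimal sketch of a get-or-fetch cache with a per-entry TTL.
const cache = new Map();

async function cachedFetch(key, fetcher, ttlMs) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.storedAt < ttlMs) {
    return hit.value;                 // still fresh: skip the service call
  }
  const value = await fetcher();      // miss or stale: call the service
  cache.set(key, { value, storedAt: Date.now() });
  return value;
}
```

Note how the TTLs stack: a response can already be `originTtl` old when it enters this cache and then live another `ttlMs` here, which is exactly the up-to-60-minutes example above.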
Upgrading to Babel 7 and removing server-side transpiling
This is a bit of a technical, under-the-hood improvement. We had a generic webpack setup which transpiled our server-side code the same way as our client-side code, but there’s actually no need for that. Why should server-side code be transpiled to also work in IE 11? Right, it shouldn’t.
The only thing we actually wanted to transpile is the import/export syntax, because in our case rewriting everything to require() would take a lot of time. Rule of thumb: if you don’t need imports, always go with require() on the server. Then there’s no reason to transpile your code at all.
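In practice this boils down to targeting the Node version you actually run, so preset-env skips all the browser transforms. A hypothetical babel.config.js sketch (our real setup differs):

```javascript
// babel.config.js — server-side sketch: only rewrite ES modules, no IE 11 work.
module.exports = {
  presets: [
    ['@babel/preset-env', {
      targets: { node: 'current' }, // transpile only what this Node lacks
      modules: 'commonjs',          // rewrite import/export to require()
    }],
  ],
};
```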
Together with the upgrade from Babel 6 to 7, we trimmed off a lot of kilobytes which the server no longer has to process! We even had one case where a bundle was reduced from 400 kB to 200 kB!
The White Zebra rises
After a race of almost two months and with all the improvements we made, we came in second with an improvement of our aggregated Lighthouse Performance score from 71 to 78.
For this accomplishment we received some very nice words from Google:
Great work making it to the end of the Speed Race with an impressive speed uplift! It was a really close call for 1st place and we really enjoyed your enthusiasm throughout the race. Know that this is still a huge accomplishment amongst the 85 teams across Northern Europe — congrats!
Don’t drop the ball, implement performance budgets
The race may have ended, but that doesn’t mean we shouldn’t focus on speed and performance anymore; on the contrary, it should always be top of mind. That’s why we added performance budgets on the Lighthouse Performance score, the Speed Index and content sizes.
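One way to enforce such budgets is Lighthouse’s own budget.json format (passed to Lighthouse via its budget-path option). The numbers below are illustrative, not our actual budgets:

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "speed-index", "budget": 4000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "image", "budget": 150 }
    ]
  }
]
```

Timing budgets are in milliseconds and resource-size budgets in kilobytes; Lighthouse flags every audit that exceeds them, so regressions show up before they ship.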
Thank you for taking the time to read this story! If you enjoyed it, clap for me by clicking the 👏🏻 below so other people will see it here on Medium.
I work at Wehkamp.nl, one of the biggest e-commerce companies of 🇳🇱
We have a Tech blog; check it out and subscribe if you want to read more stories like this one. Or take a look at our job offers if you are looking for a great job!