Improving mobile-web performance — a case-study

Andrzej ‘nAndy’ Łukaszewski
Fandom Engineering
May 4, 2020

In 2007 Steve Souders wrote High Performance Web Sites, the first book I read about web performance. Two years later he published another book with a catchy title, Even Faster Web Sites. I recommend both of them as a great introduction to the basics of the web performance world.

However, the first one is really special in my opinion because 10 out of its 14 pieces of advice concern the frontend. This led to the golden rule of web performance, which I’d like every web developer to know: “Only 10–20% of the end user response time is spent downloading the HTML document. The other 80–90% is spent downloading all the components in the page”.

With the golden rule of web performance in mind, let’s fast-forward to 2017. Users had been moving from desktop to mobile devices for a couple of years, and this is when AMP appeared as a solution for even faster mobile websites. It’s also when we decided to prepare an AMP experiment and compare it with our mobile-wiki. The results of the comparison can be watched in the video below.

Before you watch the video, a small explanation of what mobile-wiki is: it’s our front-end Single Page Application (SPA) for mobile traffic, powered by Ember.js with server-side rendering.

2017 webpagetest.org comparison of mobile-wiki without and with AMP

When we ran these tests back in 2017, our mobile-wiki app reached first meaningful paint around 1s faster than AMP, but it needed more time to become visually complete as there was more to load. Because of how complex the AMP versions of our pages were and how many limitations AMP came with, we decided to improve the performance of mobile-wiki instead of continuing the AMP experiment.

Changes and tools

In the first phase of our work we used webpagetest.org a lot. It helped us determine the impact of the code changes we were making without the big cost of real-user monitoring. Another advantage is the stability of its results: when you run a test on webpagetest.org you pick a location, a network connection and a device. Fixing those three variables and running tests on the same code gives very similar results.

Sites like ours display two types of ads: direct and indirect. The first type uses front-end code we develop ourselves. Webpagetest was a great tool to verify how a single ad impacts web performance and to create guidelines for our designers and sales people:

2017 comparison of the same page with the same ad but different ad asset sizes

We ran a webpagetest.org test on a synthetic page where the same direct ad loaded, with no changes in the code but with smaller assets. The result was a faster-loading page and a guideline for the Design Team: keep ad assets below 70 kB.

There were assets loaded on every page from third-party domains, and we decided to preconnect to those domains. It’s just two lines of code per domain:

An example of preconnect and dns-prefetch
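A minimal sketch of those two lines, with ads.example.com standing in for one of the real third-party domains:

```html
<!-- Open the connection early; dns-prefetch acts as a fallback for
     browsers that don't support preconnect. The domain is a placeholder. -->
<link rel="preconnect" href="https://ads.example.com">
<link rel="dns-prefetch" href="https://ads.example.com">
```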

When we ran the test in 2017, we could see the results on webpagetest.org:

The test results of adding preconnect and dns-prefetch from 2017

We made a couple of other low-hanging-fruit changes (loading assets from the same domain, removing unused parts of the code, etc.), measuring their success with webpagetest.org.

At some point we found ourselves in a spot where webpagetest.org wasn’t good enough, which was connected with changes to our ad stack. That was the second phase of our project, and its success could only be verified with real-user metrics. But we didn’t have any.

We decided to create one, and with every pageview we started sending a small amount of data to our data warehouse. We wanted to optimise the visually complete metric on visits from mobile devices, but there was no clear metric in the Performance API which we could just send.

We ended up with a proxy: ad_load_time. When the very top ad loads, we know the page is visually complete. Once we released the code changes we started tracking the magic number using Jupyter Notebooks, Sroka and pandas.
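On the browser side, the idea can be sketched roughly like this (the event name and endpoint below are hypothetical illustrations, not our production code):

```javascript
// Hypothetical sketch: when the top ad signals that it has rendered,
// record the elapsed time since navigation start and beacon it to the
// data warehouse ingestion endpoint ('/metrics' is made up).
window.addEventListener('topAdRendered', () => {
  const adLoadTime = Math.round(performance.now());
  navigator.sendBeacon('/metrics', JSON.stringify({
    metric: 'ad_load_time',
    value: adLoadTime,
  }));
});
```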

Plot generated in Jupyter Notebook showing ad_load_time metric over time in 2018, source: internal data warehouse
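On the analysis side, a plot like the one above could be produced with pandas along these lines (the file and column names are assumptions; at Fandom the data would be fetched through Sroka):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed schema: one row per pageview with a timestamp
# and the measured ad_load_time in milliseconds.
df = pd.read_csv('ad_load_time_2018.csv', parse_dates=['timestamp'])

# A daily median is more robust to outliers than a mean.
daily = df.set_index('timestamp')['ad_load_time'].resample('D').median()

daily.plot(title='ad_load_time (daily median, ms)')
plt.show()
```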

Ads on our pages are loaded by the Google Publisher Tag (GPT) library. We load it on every page, and we also make lots of requests to different ad networks, so to speed things up we preconnected to those domains as well. We removed one more unnecessary request. We reordered some parts of the code so ads on mobile started loading faster. GDPR happened, and we removed more code. And finally, we removed a huge dependency (AdEngine 2) from the “hack times” which, in short, was loading desktop ad assets together with mobile ad assets on mobile devices.
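For reference, preconnecting to the GPT host and loading the library asynchronously looks something like this (a sketch of the standard pattern, not our exact markup):

```html
<!-- Open the connection to the GPT host early, then load the library
     asynchronously so it does not block rendering. -->
<link rel="preconnect" href="https://securepubads.g.doubleclick.net">
<script async src="https://securepubads.g.doubleclick.net/tag/js/gpt.js"></script>
```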

#poco, or simply: why?

The hashtag #poco is Polish for #whatfor (in direct translation) and we use it quite a lot. Why should we bother and invest in optimising web performance? For an engineer it’s pure fun, and sometimes that’s enough to convince her to work on it ;-)

Yet we need more if we want to convince our boss. The most logical approach seems to be checking the business metrics and how our work impacted the business. We’re quite often unable to provide a direct translation of our work into dollars. However, with data in Google Analytics and our data warehouse we can find some proxies to look at.

The first one, from Google Analytics, is average time on page:

2018 results of average time on a page, source: Google Analytics

It’s hard to tell. You can notice an increase in June, but it can also be seasonality. The biggest change was released at the end of June 2018, and we can see that was when the metric was at its best, but a couple of days after that it went down.

We can say that better web performance may encourage our users to spend more time on the page, but on the other hand a user can find the information she’s interested in much faster and leave the page sooner. Therefore, average time on page doesn’t seem like the best business metric to verify your hard work on improving web performance.

The more pageviews your site gets, the more useful it is for people, right? Sounds like a perfect business metric, so does it change with web performance improvements?

2018 pageviews for mobile, source: Google Analytics

If we take a look at it, we can also notice an increase in June, but it happened weeks before our change. It looks more like a change caused not by the performance work but by something in the fans’ world, for example a TV series release which brought bigger traffic to our pages.

Bounce rate is another business metric; it basically tells you how many users visit your site by accident or decide to leave a slow-loading page. You want this metric to be as small as possible.

2018 bounce rate metric for mobile, source: Google Analytics

When we look at it, we can see quite a drop after 6/26 which continues for the next week, and it keeps a downward trend afterwards! Are our users less eager to close the window because of the faster page? I hope so! The change is statistically significant even though it’s only a 1–1.2% difference after the removal of the big tech-debt (AdEngine 2).
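To illustrate what “statistically significant” means here, a two-proportion z-test can check whether a bounce-rate drop of this size could be chance. The session counts below are made up; only the roughly one-percentage-point difference mirrors the post:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical numbers: bounced sessions / total sessions,
# before and after the release.
bounced = [430_000, 419_000]
sessions = [1_000_000, 1_000_000]

p1, p2 = bounced[0] / sessions[0], bounced[1] / sessions[1]
pooled = sum(bounced) / sum(sessions)
se = sqrt(pooled * (1 - pooled) * (1 / sessions[0] + 1 / sessions[1]))
z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided test

# With samples this large, even a ~1 percentage-point drop is far from chance.
print(f'z = {z:.2f}, p = {p_value:.2g}')
```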

We can back this up with data from our data warehouse and look at average pageviews per session. This metric is a deeper look at pageviews because we check how many of them were made by the same user. And similarly to pageviews overall: the more pageviews per session, the more useful our site theoretically is.

2018 average pageviews per session for mobile, source: internal data warehouse
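For illustration, this metric can be computed from a raw pageviews table along these lines (the file and column names are assumptions, not our real schema):

```python
import pandas as pd

# Assumed schema: one row per pageview with a timestamp and session id.
pv = pd.read_csv('pageviews_2018.csv', parse_dates=['timestamp'])

# Count pageviews per (day, session), then average across sessions per day.
per_session = (pv.groupby([pv['timestamp'].dt.date, 'session_id'])
                 .size()
                 .groupby(level=0)
                 .mean())
print(per_session.head())
```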

Since the beginning of our web performance work the trend has been increasing, but it doesn’t change drastically at the end of June.

Overall, it seems like our work makes sense, and yet the message from our metrics isn’t clear enough, unfortunately. However, my data analyst friends who helped me put together this blog post highlight the fact that user metrics don’t change as fast as technical metrics. By investing in the optimisation of web performance you build a better relationship with your users; they tend to come back more often because they’re not irritated by your page’s speed. How can you see it in the data? Look at the same plots and observe the long-term trend:

2018 average pageview per session for mobile with a trendline, source: internal data warehouse

Other business metrics which are not directly connected to users but may positively reflect your web performance work in the long term are the bandwidth bill and SEO positioning. With a lighter, optimised web page you use less bandwidth, so the yearly bill gets smaller as well. Page speed (especially on mobile) is said to be a ranking factor in Google Search, and SEO is one of the most important business indicators for many websites. This is most likely why Google gathers performance data from user browsers and recommends using Lighthouse for performance evaluation.

There are companies which did this kind of analysis as well and found it beneficial to focus on web performance; you can find lots about it on the internet. For example, web performance optimisation increased search engine traffic and sign-ups by 15% for Pinterest (they wrote about it). Not convincing enough? Do you need a sales example? AutoAnything.com noticed a 12–13% increase in sales after improving their page load time.

Looking at the analysis above and knowing how others invest in web performance, I think it does matter and that the impact is more visible long-term. Therefore, thinking about it all the time when implementing new features is the best way to work on and improve the web performance of your site.

The work described in this blog post happened in 2017 and 2018, and it was great teamwork I will never forget. Similarly, this article wouldn’t be in this shape without teamwork; thank you Dorota, Martyna, Mech and Wade for your help!

PS. Sometimes there are also non-front-end changes which may require an ad-hoc, cross-team project ;-)

Originally published at https://dev.fandom.com.


Andrzej ‘nAndy’ Łukaszewski is an engineering team leader and ad engineer at Fandom, a Czarodzieje Kodu organizer, and a fan of Łąki Łan, ukulele and snowboarding.