
Developers: What you should know about web performance

Christophe Limpalair
8 min read · Dec 24, 2015

What’s the difference between fast websites and slow ones?

Is there one correct answer?

No, unfortunately there isn’t. That’s because websites have many different pieces and each of those pieces have the potential to slow things down. So instead of just giving you a list of things you need to do, this post is going to explain how certain pieces slow things down, and what you can do about it.

Like this great proverb says:

Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime

Instead of just telling you to add an “async” attribute to your scripts, or to lay out your pages in a specific order, I’m going to explain why those changes make a difference. That way, you can play around with your apps and see what works for you.

These tips, by the way, come from a fantastic conversation I had with Ilya Grigorik.

Who is Ilya Grigorik? He’s the co-chair of the W3C Web Performance Working Group, and a Google web performance engineer. Yeah, he knows a thing or two about performance.

“One thing everyone should do to speed up page loads”

Like I just mentioned, there is no such thing. The web is a bit more complicated than that. Plus, page load time may not even be the right metric for you to focus on! (We’ll get to this in a moment.)

However, there are a few things which are really important and usually make a noticeable difference. You may have seen these mentioned before, but you may not understand why they are important.

1. Compression

Delivering HTML, CSS, and JavaScript with gzip compression reduces the number of bytes that have to travel across the wire. This can significantly reduce the time it takes to download assets. Since the browser needs HTML and CSS to render a page, we want these resources downloaded as fast as possible.

2. Optimizing images

I was recently chatting with a friend who develops WordPress websites for clients who often upload a lot of images. One problem he has is that those clients just take an image from their camera and upload it directly to their website.

Images straight from a phone camera can be well over 1 MB. Even if you resize the image with CSS, you’re still sending a very heavy file across the wire. Users on slower networks are going to sit there waiting for a while.

Thankfully, there are ways to fix this. This episode from my show is all about optimizing images. I highly recommend you check it out if you haven’t already.
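One common fix, on top of compressing the images themselves, is to serve appropriately sized versions with srcset, so a phone never downloads the full-resolution original. A quick sketch, with made-up file names:

```html
<!-- The browser picks the smallest candidate that satisfies the layout. -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Product photo">
```

The srcset attribute lists candidates with their intrinsic widths, and sizes tells the browser how wide the image will render, so a small screen can grab the 400-pixel version instead of the full-size one.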

3. Don’t transfer what you don’t need

Look at scripts and CSS files on your different pages and ask yourself whether those files are actually needed for that page or not. You’ll probably find some things that were added a while back and never removed.

Plugins are really bad about this. I go on quite a few WordPress websites that load a dozen CSS files, and half of them aren’t even used on that particular page! Many non-WordPress sites do this too. Shoot, I checked earlier and found I was loading an unnecessary script on a few pages of my own site.

It can be scary to get rid of scripts or stylesheets. What if one actually is needed on that page, and you just can’t remember? There are a few tools that can help with this. One of them is Chrome DevTools (under Audits).

Can you spot a common theme between these optimizations? They all reduce the number of bytes we need to transfer.

Progressive rendering

You want to give HTML bytes to the browser as soon as possible.

For example: A request comes in and (ideally) your data is cached so your server can stream it back quickly. Then, the browser can start parsing that data to display something on the screen.

Now, I mentioned earlier that page load time may not be the performance metric you need to focus on, thanks to progressive rendering.

Progressive Rendering (Source)

A page can be a little bit heavier, but as long as you’re showing users something in a short amount of time (like under a second), they’ll still feel like it is fast.

Amazon does a good job of this:

Amazon progressive rendering

In this particular WebPageTest run, we get the first paint at 1.5 seconds, but as you can see, it doesn’t include everything. It includes enough for you to start navigating around or looking at holiday deals.

Then, at 3.5s, another section loaded with more deals. At 6.5s, things were still loading! In fact, the full page load didn’t finish until 18 seconds. Would you wait around that long? I doubt it!

If Amazon were focused on having the lowest page load time possible, someone would surely be fired. Instead, they focus on sending back the most important bytes in the earliest packets. Going one step further, they probably cram the most important bytes into the first packet. And my bet is they also focus on sending you those bytes as soon as possible.

That’s where the Time to First Byte (TTFB) comes in.

When the browser sends a request for a page, it sits there waiting for a response. The TTFB represents how long it took to receive the very first response byte. This time doesn’t just represent how long it took for your server to generate a response, but also the time it took to travel across the wire.
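You can read this number for any page straight from the browser’s Navigation Timing API. Here’s a small sketch; the helper name is my own, but responseStart and requestStart are standard properties of a navigation timing entry:

```javascript
// Compute TTFB from a PerformanceNavigationTiming entry.
// requestStart  = when the browser sent the request
// responseStart = when the first byte of the response arrived
function timeToFirstByte(nav) {
  return nav.responseStart - nav.requestStart;
}

// In a browser console, you would use it like this:
//   const [nav] = performance.getEntriesByType('navigation');
//   console.log(`TTFB: ${timeToFirstByte(nav)} ms`);
```

Note that this measures the wait between sending the request and hearing back, which is exactly the server-generation-plus-network-travel time described above.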

The screenshot here showed a pretty fast TTFB. If you go around the web and look at different sites, you’ll see a wide range of TTFB times.

Why is this the case, and how do we minimize this time? Should you even focus on optimizing it? Great questions, and I’ve got more info on that here.

Also, Steve Souders gave a great talk on what performance metrics to measure, if you’re interested in learning more about that. Page load time isn’t always the best metric.

What else can make content appear faster?

Now that we have a faster TTFB, is everything going to display lightning fast? Not yet. Let’s talk about the Critical Render Path.

The Critical Render Path is the sequence of steps your browser has to take in order to get the HTML, construct the DOM, get the style information, execute critical JavaScript, and then paint the content.

Whew. That’s a lot of work.

It is, and that’s why we need to keep a close eye on it. The more HTML, CSS, and JavaScript you have, the longer this can take. That’s why you’ve been told to add an async attribute when loading JavaScript files.

You see, when the browser runs into JavaScript, it can’t possibly know whether that JS is going to alter the DOM or not. So it has to assume that it will, and it blocks rendering until the JavaScript finishes executing. By adding the async attribute, you promise the browser that the script isn’t critical to the page, and the browser keeps on rendering without waiting for the JS to execute.
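In markup, that’s the difference between these two script tags (the file name is made up):

```html
<!-- Blocks parsing and rendering until it's downloaded and executed: -->
<script src="analytics.js"></script>

<!-- Downloads in parallel and runs when ready, without blocking rendering: -->
<script src="analytics.js" async></script>
```

One caveat: async scripts execute as soon as they finish downloading, in whatever order that happens. If your scripts depend on each other or on the full DOM, the defer attribute is the alternative; deferred scripts run in document order after parsing finishes.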

Does that mean you shouldn’t async scripts that modify your page? Maybe. Often, though, even if you async scripts that modify the page, it is still functional from a user’s perspective. They can see the content and they can start interacting with it. Sure, some actions may not be available yet, but perhaps that can wait a little bit longer. It depends on the app, of course, so experiment with this and see if it works for your needs.

This critical path is also why it’s so important to receive our bytes as soon as possible, because the earlier you can start the entire process, the sooner it will finish. Here’s a little bit more on optimizing the critical rendering path.

How can you measure if using async (or other optimizations) is beneficial to your app or hurting it?

One nice tool for measuring this is WebPageTest. You can get all kinds of useful information, including the film strip view. The film strip view is what I used to show the visual progress of Amazon’s page. You can do the same for your website and compare side-by-side with and without async.

More recently, DevTools has implemented its own film strip view.

Open Chrome’s DevTools and go to Performance -> Enable Screenshots -> Reload your page.

You’ll see screenshots of your page load progress. Nice, right?

So now, here’s what you can do:

  1. Toggle your network speed (remember, not everyone has super fast internet)
  2. Take a look at the film strip
  3. Change your scripts to async (or make other changes)
  4. Compare film strips

You can throttle your network in DevTools.

Another tool I recently stumbled upon is SpeedCurve. It was created by two bright minds, Mark Zeman and Steve Souders, and it looks very helpful.

DevTools can be overwhelming. How can we get a better understanding of how to use it?

Complication is an unfortunate side effect of adding so much functionality.

What better way to learn than to look at examples and then practice? Paul Lewis and others walk through how to use DevTools on real websites. Here’s another good example on fixing scroll performance issues.

More info

This is just a short summary of the full interview, where we go into a lot more detail and cover more important topics (like how HTTP/2 is different and whether we should still minify and concatenate or not).

You can read the full summary and/or listen to the interview here. There’s also a video format if you prefer.

Connect with Ilya on Twitter and read more about performance on his website.

Connect with me on Twitter and keep up with my newest content on ScaleYourCode.

Thanks for reading! If you learned anything from this, please hit the recommend button below so more people can learn :)



Christophe Limpalair

Helped build 2 startups to acquisition in 5 years: ScaleYourCode (Founder) and Linux Academy. Now building Cybr, an online cybersecurity training platform