…well, that’s how you’ll want to deliver your code, ideally.

The definitive guide to the fastest website

Stolen from https://www.hobo-web.co.uk/your-website-design-should-load-in-4-seconds/

If you look around the internet, you’ll find a lot of guides, checklists, website analyzers, and even some fancy, comprehensive StackOverflow posts.

The problem is, a lot of these resources are obsolete and not up to date with state-of-the-art optimizations. Browsers and protocols change every day.

Ever since I started developing CodeBottle, I’ve aimed for the best performance possible and tried to get the most out of what I had. That led me to learn the best practices for optimizing a website, and this is my attempt at the ultimate article for making your website stand out.

Getting your webpages to load quickly is critical: most users expect a website to load within 2 seconds, and most will leave if it hasn’t loaded after 4 seconds. Speed is important if you care about how your page ranks on Google, too.


Start by fixing your back-end

Nothing in the world can make your webpage load fast if your back-end responds slowly. The best way to catch slowdowns is by profiling and benchmarking.

Benchmarking

A benchmark of my API. Oh look, sooo fast :)

Benchmarking can be as simple as using ApacheBench or any other tool to test the speed of your code. If benchmarking on a local machine turns out slow, stop benchmarking and start profiling your code.

You should aim for under 150ms for a webpage to load, excluding any network latency (you should benchmark against localhost anyway); 50–100ms is considered good. Make sure you’re testing with the expected concurrency level, though.
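
For example, a quick ApacheBench run against a local endpoint could look like this (the URL, request count, and concurrency are placeholders; set -c to your expected concurrency):

    ab -n 1000 -c 20 http://localhost:8080/

The interesting numbers in the output are the mean time per request and the percentile breakdown at the bottom.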

Your server’s responsiveness affects your TTFB (Time to First Byte), which is shown on some website analyzers.

Profiling

Profiling is an important way to discover performance flaws in your code, and there are many tools to help you identify the slow parts. Google the appropriate tool for the technology stack you use.

Interestingly, I’ve seen many applications in which DB queries account for most of the processing time, so I highly recommend logging your DB queries and seeing whether you can optimize them.
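
As a minimal sketch of the idea, assuming a Node.js back-end with some generic query(sql, params) function (both hypothetical), you can wrap it to log anything that takes too long:

    // Hypothetical wrapper around your existing query function:
    // logs any query slower than 100ms so it can be investigated.
    async function timedQuery(sql, params) {
      const start = Date.now();
      const result = await query(sql, params); // your real DB call
      const elapsed = Date.now() - start;
      if (elapsed > 100) {
        console.warn(`Slow query (${elapsed}ms): ${sql}`);
      }
      return result;
    }

Most DB servers also have a built-in slow query log you can simply enable instead.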

Optimizing the DB

With so many DB technologies, it’s hard to give a single list of what one could do to optimize the DB, but broadly speaking, make sure:

  • Your data is indexed for faster access (see the SQL sketch after this list)
  • Your queries are efficient (See this for example)
  • You have a good connection between back-end and DB server
  • Your DB server has enough resources
  • You’re creating multiple shards when needed
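
To illustrate the indexing point from the list above, here’s a rough SQL sketch with a hypothetical users table; exact syntax varies by database:

    -- Without an index, looking a user up by email scans the whole table.
    CREATE INDEX idx_users_email ON users (email);

    -- Most databases let you verify the index is actually used:
    EXPLAIN SELECT * FROM users WHERE email = 'someone@example.com';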

Don’t forget NoSQL!

Performance comparison between Mongo and SQL, source

NoSQL (here meaning an in-memory store used as a cache) can drastically affect the performance of your application. Make sure you’re caching frequently accessed data, especially data that doesn’t get updated too often. While NoSQL doesn’t provide the same data persistence guarantees, it can be much wiser to use it for certain things.

For example, if you have playlists on your website, and millions of users are adding to and removing from playlists all the time, it might be wise to store those playlists in NoSQL given its nature.
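
A minimal sketch of the playlist idea, assuming a Redis server and the node-redis client (the key and value names are made up):

    import { createClient } from 'redis';

    const redis = createClient(); // assumes a local Redis on the default port
    await redis.connect();

    // Append a track to a user's playlist, then read the playlist back.
    await redis.rPush('playlist:42', 'track:1337');
    const tracks = await redis.lRange('playlist:42', 0, -1);
    console.log(tracks);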

Check your web server’s responsiveness

Yes, your web server can be slow if it’s configured improperly. Apache once cost me a 50–100ms slowdown on one of my websites until I discovered it was due to a URL rewriting rule with a complex regex.


The front-end matters, too.

We’ve covered some important parts of optimizing a back-end, but what about the front-end?

Optimize your JS

Profiling in Firefox

Most modern websites use JavaScript intensively, creating immersive experiences, but all at the expense of performance.

I’ve seen so many websites, especially ones that rely on libraries like jQuery, incorporate too many plugins, forcing the browser to parse and execute a lot of JS code and drastically increasing the initial load time.

Another common issue is using JS to do things that could have been done in CSS. Transitions are a popular example. NEVER DO THAT.
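
For instance, a fade you might be tempted to script in JS is a few lines of CSS (the class names here are hypothetical):

    /* Let the browser animate the opacity change on its own. */
    .menu {
      opacity: 0;
      transition: opacity 0.3s ease;
    }
    .menu.open {
      opacity: 1;
    }

All the JS has to do is toggle the open class.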

You should also reduce the number of DOM operations as much as you can. These are critical: the amount of processing needed for a single DOM change can be surprisingly huge, and here’s why.
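
One common trick is to build changes off-DOM and insert them in one go; a rough sketch (items is whatever data you already have):

    const list = document.querySelector('#results');
    const fragment = document.createDocumentFragment();

    // Build everything off-DOM first...
    for (const item of items) {
      const li = document.createElement('li');
      li.textContent = item.title;
      fragment.appendChild(li);
    }

    // ...then trigger a single reflow with one insertion.
    list.appendChild(fragment);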

One last important tip is to execute any code that would take more than 16ms on a separate thread. Why 16ms? Because that’s roughly the per-frame budget that keeps the user feeling 60fps. Make sure you’re testing on a typical/old machine; your 64-core machine isn’t necessarily what a typical user has.
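
In the browser, “a separate thread” usually means a Web Worker; here’s a minimal sketch (the file name, message shape, and hugeArray are all made up):

    // main.js: hand the heavy work off so the UI thread stays responsive.
    const worker = new Worker('heavy-work.js');
    worker.postMessage({ numbers: hugeArray });
    worker.onmessage = (event) => {
      console.log('Result:', event.data);
    };

    // heavy-work.js: runs off the main thread.
    self.onmessage = (event) => {
      const sum = event.data.numbers.reduce((a, b) => a + b, 0);
      self.postMessage(sum);
    };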

Lastly, there’s a JS profiler built into most modern browsers, so be sure to use it to get the best out of your code.

Optimize your CSS

Optimizing your CSS is really important. Every time the browser renders an element, it has to go through all of the CSS you’ve defined to figure out which rules apply.

Reducing the amount of CSS you have is a good place to start; some websites load a lot of unnecessary CSS just to view a single page. Good websites load different CSS for different pages depending on what each needs, resulting in the best performance possible.

Interestingly, CSS can do a lot of awesome animations; however, complex CSS animation is still slow in most browsers at the time of writing, so it might be better to use GIFs where applicable (though I’d rather not do that myself…).

Optimizing images

I stole this one from https://speedcurve.com/blog/web-performance-page-bloat/

Many statistics show that most of a webpage’s size comes from images, and most of the time websites optimize those images badly for the web.

So here’s a basic checklist of the things you should check:

  • Don’t use 150MP photos, for now.
  • Don’t use images for things that Font Awesome, for example, can replace.
  • Do strip out things like unnecessary EXIF information (see the example after this list).
  • I so much love websites that do this!
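
For the EXIF point above, a hedged example assuming you have jpegoptim installed; other tools like exiftool or ImageMagick can do the same job:

    # Losslessly optimize the JPEG and drop all metadata (EXIF, comments, ...).
    jpegoptim --strip-all photo.jpg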

Now here comes the important part

Optimizing the way you deliver resources is the most important optimization of all, in my opinion. Below I’m going to talk about a lot of important things that shouldn’t be hard to get working but can drastically improve page load times.

Optimize image delivery

Whether you use a third party or your own self-hosted solution, send users only an image of the size they need. For example, for 1920x1080 screens you can deliver the full-size photo, while for 1280x768 screens you can deliver a much smaller photo that still looks crisp on that screen yet weighs far less. This technique is quite effective if you have a good share of mobile users.
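
The standard way to do this in plain HTML is srcset and sizes; a sketch with hypothetical file names:

    <img src="photo-800.jpg"
         srcset="photo-800.jpg 800w,
                 photo-1280.jpg 1280w,
                 photo-1920.jpg 1920w"
         sizes="100vw"
         alt="A photo that scales with the viewport">

The browser picks the smallest candidate that still looks crisp for the user’s viewport and pixel density.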

Minify your resources

Minifying resources can make your loading time waaay faster. Make sure you’re minifying all of your resources, including the HTML you send to browsers, the JSON your API returns, the SVG you use everywhere, and obviously, your JS and CSS.

The JS bundle of my website is 1.6MiB, but only 0.2MiB when minified. CSS also goes down from 1MiB to 0.1MiB.
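
If you’re not using a bundler that minifies for you, a standalone minifier works too; a hedged example using the terser CLI (the file names are placeholders):

    # Compress and mangle a script into a much smaller production build.
    terser app.js --compress --mangle -o app.min.js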

Use GZIP

GZIP test using https://checkgzipcompression.com/

Yes, just enable it in your web server and you’re done; compressible text resources often shrink by more than 50%, making your website instantly faster.
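
In nginx, for example, it’s a few directives (the exact list of MIME types is up to you); text/html is compressed by default once gzip is on:

    # Enable gzip for text-based responses; images are already compressed.
    gzip on;
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_min_length 1024;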

Teach browsers and proxies to cache

The idea is simple; If you have a CSS file that loads on every page on your website, why not just load it once and for all?

That’s what Cache-Control does; all it takes is setting that HTTP header to a proper value, and then everything automagically works.

You should definitely use different cache periods for different types of resources. If you have a folder of images, these are typically unlikely to ever change, so maybe cache those for a year.

However, you might want to set your HTML pages to no-cache, as those are likely to change frequently.
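
In nginx, for example, those two policies are a couple of location blocks (the paths are hypothetical):

    # Images rarely change: let browsers keep them for a year.
    location /images/ {
        add_header Cache-Control "public, max-age=31536000";
    }

    # HTML is likely to change: force revalidation on every load.
    location / {
        add_header Cache-Control "no-cache";
    }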

Some people have trouble setting cache periods for JS and CSS, as they want to cache them for a reasonable time while still letting users get an up-to-date experience ASAP. That’s where resource versioning comes in.

The idea is that you set a fairly long cache period for your JS and CSS and, for every update you make, serve them from a different URL. For example, if you use something like Webpack, you can just append a hash of the contents to the filename upon generation. This way browsers still cache the resources, yet new ones are loaded as soon as they are published.
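
With Webpack, for example, that’s a one-line change in the output settings (a minimal sketch, not a complete config):

    // webpack.config.js: embed a hash of the file contents in the name,
    // e.g. main.3b7e9f.js, so every change produces a new URL.
    module.exports = {
      output: {
        filename: '[name].[contenthash].js',
      },
    };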

Another problem is resources that need to be cached on a per-client or per-user basis, and that’s what the Vary header is for. For example, Vary: Authorization tells browsers and caches that a resource at the same URL can be different if the Authorization header is different.

Serve from a cookieless domain

Your website is likely to use cookies, or use a service that uses cookies, and because cookies are sent on every request to the server, they are a total waste on requests for things like scripts and styles. Thus, serve static resources from a separate, cookie-free domain, such as static.example.com.

Reduce redirects

Chained redirects can drastically increase page load time, as a new request has to be initiated for each redirect. And obviously these are not done in parallel, because the browser never knows in advance what URL comes next.

ETags are quite important

An ETag is a small hash your web server can send to browsers (if configured). When a browser requests a resource after its cache period expires, it sends the stored ETag along; if it still matches the server’s current one, the cached copy is reused and no content gets downloaded. This is also useful for resources that are set not to be cached, since they’re not entirely redownloaded every single time.
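
The exchange looks roughly like this (the hash value is made up and shortened):

    First response from the server:
        ETag: "5f2c9a"

    The browser's next request for the same URL:
        If-None-Match: "5f2c9a"

    The server's reply when the content hasn't changed:
        HTTP/1.1 304 Not Modified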

Load resources from CDNs

There are several reasons to load resources from stuff like cdnjs:

  • CDN servers are typically very fast and responsive
  • They serve the user from the nearest possible location
  • Browsers can download resources from multiple domains in parallel
  • They mostly have little to no downtime
  • They reduce the load on your own server
  • Browsers might have already cached a resource because another website loaded that same resource.

Fetch resources in advance

Prefetching gives the browser a hint about which resource is going to be needed next, allowing it to fetch that resource before the user takes the action that requires it. This allows for instant experiences.

For example, when you Google something and Google thinks you’re very, very likely to click a certain result, it gives your browser a hint to prefetch that result’s resources. The end result is that when you click the result, the page loads instantly, with almost 0ms delay.

See this awesome post to set that up. It’s pretty simple.
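
At its simplest, it’s a link tag in the head (the URLs are placeholders):

    <!-- Hint: the browser may fetch these during idle time. -->
    <link rel="prefetch" href="/next-page.html">
    <link rel="prefetch" href="/img/next-page-hero.jpg">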

Use async and defer

That’s really how these work

async and defer are two important keywords; they tell browsers how and when to load scripts, and in what order. See this for more information.
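
In practice it’s just an attribute on the script tag (file names are placeholders):

    <!-- Downloads in parallel, executes as soon as it arrives (order not guaranteed). -->
    <script src="analytics.js" async></script>

    <!-- Downloads in parallel, executes after the document is parsed, in order. -->
    <script src="app.js" defer></script>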

Use a fast DNS

Every time a client tries to connect to your website, it has to translate your domain into an IP address by asking a DNS server. Make sure you use a fast DNS provider for your domain. This can also affect how long it takes for changes you make to your DNS config to take effect.

I recommend Cloudflare’s.

Enable HTTP/2 and SPDY

These are pretty new protocols, and not many websites use them as far as I know. Enable them on your server so that capable clients can benefit.
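
In nginx, for example, enabling HTTP/2 is a single extra keyword on the TLS listener (the certificate paths are placeholders); note that browsers only speak HTTP/2 over HTTPS:

    server {
        listen 443 ssl http2;
        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;
    }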

Use appcache if it is a webapp

Consider WhatsApp Web: it’s a bit more than a simple website, more like a desktop application, and it would make sense to be able to access it even while offline.

That’s what appcache does: it tells your browser what is only accessible when the network is connected and what can be loaded while the browser is offline. See this for a guide.
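
A minimal sketch of a manifest (the file names are hypothetical), which the page opts into with <html manifest="app.appcache">:

    CACHE MANIFEST
    # v1 - bump this comment to invalidate the cache

    CACHE:
    /css/app.css
    /js/app.js

    NETWORK:
    *

    FALLBACK:
    / /offline.html

The manifest has to be served with the text/cache-manifest MIME type, and changing the file (even just the version comment) is what triggers a re-download.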


The End

So yeah, that’s probably all I know for now. I’ll try to keep this updated with the latest information as much as I can, and I hope this article made sense to you.

If you find this article useful, like, idk, clap/share/whatever. ❤