The State of the Web

A guide to impactful performance improvements

The Internet is growing exponentially, and so is the Web platform we create. Too often, though, we fail to reflect on the wider picture of connectivity and the contexts in which our audience might find themselves. Even a short glance at the state of the World Wide Web shows that we haven’t been building with empathy or an awareness of situational variability, let alone with performance in mind.

So, what is the state of the Web today?

Only 46% of the 7.4 billion people on this planet have access to the Internet, and the average connection speed caps at an unimpressive 7 Mb/s. More importantly, 93% of Internet users go online through mobile devices, so it’s inexcusable not to cater to handhelds. Data is often more expensive than we assume: it can take anywhere from one to 13 hours of work to afford a 500 MB data package (Germany versus Brazil; for more intriguing stats on connectivity, head to Ben Schwarz’s Beyond the Bubble: The Real World Performance).

Our websites aren’t in perfect shape either: the average site is about the size of the original Doom game (approximately 3 MB). Note that for statistical accuracy the median should be used; read Ilya Grigorik’s excellent The “Average Page” is a myth. The median site size currently sits at 1.4 MB. Images can easily account for 1.7 MB of that bandwidth, and JavaScript averages around 400 KB. This problem isn’t specific to the Web platform; native applications aren’t any better. Remember that time you had to download 200 MB to get unspecified bug fixes?

As technologists, we often find ourselves in a position of privilege. With up-to-date, high-end laptops, phones and fast cable Internet connections, it becomes all too easy to forget that this isn’t the case for everyone (in fact, it’s the case for very few).

If we build the web platform from a standpoint of privilege and a lack of empathy, the result is exclusionary, subpar experiences.

How can we do better by designing and developing with performance in mind?

Optimising all assets

A request is critical if it contains assets that are necessary to render the content within the users’ viewport.

For most sites, that means the HTML, the necessary CSS, a logo, a web font and perhaps an image. It turns out that, in many cases, dozens of other assets that are irrelevant at that moment (JavaScript, tracking codes, ads and so on) are requested instead. Luckily, we can control this behaviour by carefully picking crucial resources and adjusting their priority.

With <link rel='preload'> we can manually raise an asset’s priority to High, ensuring that the desired content is rendered on time. This technique can yield significant improvements to the Time to Interactive metric, making an optimal user experience possible.
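As a minimal sketch (the file names below are placeholders), critical resources can be preloaded from the document head; note that preloaded fonts require the crossorigin attribute:

```html
<head>
  <!-- Hypothetical critical assets, requested early and at high priority -->
  <link rel="preload" href="/css/critical.css" as="style">
  <link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>

  <link rel="stylesheet" href="/css/critical.css">
</head>
```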


Critical requests still seem like a black box for many, and the lack of shareable materials doesn’t help to change that. Fortunately, Ben Schwarz published an incredibly comprehensive and approachable article on the subject — The Critical Request. Additionally, check Addy’s Preload, Prefetch and Priorities in Chrome.

Enabling Priorities in Chrome Developer Tools

🛠 To track how well you’re doing at prioritising requests, use the Lighthouse performance tool and its Critical Request Chains audit, or check the Request Priority column under the Network tab in Chrome Developer Tools.

📝 General performance checklist

  1. Enable compression (see the sketch after this list)
  2. Prioritise critical assets
  3. Use content delivery networks
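Expanding on the first item, here’s a minimal sketch of enabling GZIP compression on a Node server, assuming Express and its compression middleware (nginx, Apache and most CDNs expose equivalent settings):

```js
// Hypothetical minimal server: compress responses before serving static assets
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());            // negotiates gzip/deflate with the client
app.use(express.static('public')); // hypothetical directory of static assets
app.listen(3000);
```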

Optimising images

In some cases, similar results can be achieved with different technologies: CSS offers a range of properties useful for art direction, such as shadows, gradients, animations and shapes, allowing us to swap image assets for appropriately styled DOM elements.

Choosing the right format

  • Vector: resolution-independent and usually significantly smaller in size. Perfect for logos, iconography and simple assets composed of basic shapes (lines, polygons, circles and points).
  • Raster: offers much more detailed results. Ideal for photographs.

After making this decision, there is still a fair number of formats to choose from: JPEG, GIF, PNG-8, PNG-24, or newer formats such as WebP and JPEG XR. With such an abundance of options, how do we ensure we’re picking the right one? Here’s a basic way of finding the best format to go with:

  • JPEG: imagery with many colours (e.g. photos)
  • PNG-8: imagery with a few colours
  • PNG-24: imagery with partial transparency
  • GIF: animated imagery

Photoshop can optimise each of these on export through various settings, such as decreasing quality, reducing noise or lowering the number of colours. Ensure that designers are aware of performance practices and can prepare the right type of asset with the right optimisation presets. If you’d like to learn more about preparing imagery, head to Lara Hogan’s invaluable Designing for Performance.

Experimenting with new formats

WebP is easily the most popular contender, supporting both lossless and lossy compression, which makes it incredibly versatile. Lossless WebP images are around 26% smaller than PNGs, while lossy WebP comes in 25–34% smaller than comparable JPEGs. With 74% browser support, it can safely be used with a fallback, introducing savings of up to a third in transferred bytes. JPGs and PNGs can be converted to WebP in Photoshop and other image-processing apps, as well as through command-line interfaces (brew install webp).
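The fallback is typically handled with the picture element; in this minimal sketch (placeholder file names), browsers that understand WebP fetch it, while the rest fall back to the JPEG:

```html
<picture>
  <source srcset="hero.webp" type="image/webp">
  <!-- Browsers without WebP support ignore the source above and use the img below -->
  <img src="hero.jpg" alt="A description of the image">
</picture>
```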

If you’d like to explore the (minor) visual differences between these formats, I recommend this nifty demo on GitHub.

Optimising with tools and algorithms

If you’ve chosen SVGs, which are usually relatively small in size, they too need to be optimised. SVGO is a command-line tool that can swiftly optimise SVGs by stripping unnecessary metadata. Alternatively, use Jake Archibald’s SVGOMG if you prefer a web interface or are limited by your operating system. Because SVG is an XML-based format, it can also be subject to GZIP compression on the server side.

ImageOptim is an excellent choice for most other image types. Built on pngcrush, pngquant, MozJPEG, Google Zopfli and more, it bundles a set of great tools into a comprehensive, Open Source package. Available as a macOS app, a command-line interface and a Sketch plugin, ImageOptim can easily be slotted into an existing workflow. For those on Linux or Windows, most of the CLIs ImageOptim relies on are available for those platforms too.

If you’re inclined to try emerging encoders: earlier this year Google released Guetzli, an Open Source algorithm stemming from its learnings with WebP and Zopfli. Guetzli is reported to produce JPEGs up to 35% smaller than any other available compression method. The only downside: slow processing times (a minute of CPU per megapixel).

When choosing tools, make sure they produce the desired results and fit into the team’s workflow. Ideally, automate the optimisation process so that no imagery slips through the cracks unoptimised.

Responsible and responsive imagery

The srcset attribute

An example of srcset usage is shown below.
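This is a minimal sketch with placeholder file names and widths; the browser picks the most appropriate candidate based on the sizes hint and the device’s pixel density:

```html
<img
  src="photo-800.jpg"
  srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 50vw"
  alt="A description of the image">
```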

The picture element

An example of picture element usage is shown below.
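Another minimal sketch with placeholder file names, this time for art direction: a tighter crop is served on narrow screens and the full scene everywhere else. The same element, combined with the type attribute on source, also powers the WebP fallback shown earlier.

```html
<picture>
  <source media="(max-width: 600px)" srcset="scene-cropped.jpg">
  <source media="(min-width: 601px)" srcset="scene-full.jpg">
  <img src="scene-full.jpg" alt="A description of the scene">
</picture>
```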

📚 Make sure to read Jason Grigsby’s Responsive Images 101 guide for a thorough explanation of both approaches.

Delivery with image CDNs

Image CDNs can take a lot of the complexity out of serving responsive, optimised assets on image-heavy sites. The offerings differ (and so does the pricing), but most handle resizing, cropping and determining which format is best to serve to your customers based on their device and browser. Beyond that, they compress, detect pixel density, watermark, recognise faces and allow post-processing. With these powerful features and the ability to append processing parameters to URLs, serving user-centric imagery becomes a breeze.
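The exact parameter names vary from provider to provider; this is a purely hypothetical URL showing the general idea of resizing, recompression and format negotiation happening at the CDN:

```html
<!-- Hypothetical image CDN URL: resize to 800px wide, quality 70,
     and let the CDN pick the best format (e.g. WebP) per browser -->
<img
  src="https://images.example-cdn.com/photos/hero.jpg?width=800&quality=70&format=auto"
  alt="A description of the image">
```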

📝 Image performance checklist

  1. Use vector wherever possible
  2. Reduce the quality if change is unnoticeable
  3. Experiment with new formats
  4. Optimise with tools and algorithms
  5. Learn about srcset and picture
  6. Use an image CDN

Optimising web fonts

Even when weight isn’t the biggest issue, the Flash of Invisible Text (FOIT) is. FOIT occurs while web fonts are still loading or when they fail to load, resulting in an empty page and, thus, inaccessible content. It’s worth carefully examining whether we need web fonts in the first place. If we do, a few strategies can help mitigate the negative effect on performance.

Choosing the right format

When defining the sources of web fonts in @font-face, use the format() hint to specify which format should be used.
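A minimal sketch, assuming hypothetical font files: the browser skips sources it can’t decode and downloads only the best supported one.

```css
@font-face {
  font-family: "Brand Sans";
  src: url("/fonts/brand-sans.woff2") format("woff2"),
       url("/fonts/brand-sans.woff") format("woff");
  font-weight: 400;
  font-style: normal;
}
```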

If you’re using Google Fonts or Typekit to serve your fonts, both tools have implemented a few strategies to mitigate their performance footprint. Typekit now serves all kits asynchronously, preventing FOIT, and allows an extended cache period of 10 days for its JavaScript kit code (instead of the default 10 minutes). Google Fonts automatically serves the smallest file based on the capabilities of the user’s device.

Audit font selection

Ideally, we can get away with a single typeface featuring normal and bold stylistic variants. If you’re not sure how to make font selection choices, refer to Lara Hogan’s Weighing Aesthetics and Performance.

Use Unicode-range subsetting

The Filament Group released glyphhanger, an Open Source command-line utility that can generate a list of necessary glyphs based on files or URLs. Alternatively, the web-based Font Squirrel Web Font Generator offers advanced subsetting and optimisation options. If you’re using Google Fonts or Typekit, choosing a language subset is built into the typeface picker interface, making basic subsetting easier.
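In hand-written CSS, subsetting pairs with the unicode-range descriptor; a minimal sketch with a hypothetical Latin-only subset file:

```css
@font-face {
  font-family: "Brand Sans";
  src: url("/fonts/brand-sans-latin.woff2") format("woff2");
  /* The browser downloads this file only if the page actually
     uses characters within the declared range (basic Latin) */
  unicode-range: U+0000-00FF;
}
```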

Establish a font loading strategy

Implementing a font loading strategy ensures users are never prevented from accessing your content. Often, opting for a Flash of Unstyled Text (FOUT) is the easiest and most effective solution.

font-display is a new CSS property that provides a solution without relying on JavaScript. Unfortunately, it has only partial support (Chrome and Opera) and is currently under development in Firefox and WebKit. Still, it can and should be used in combination with other font loading mechanisms.

The font-display property in action is shown below.
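A minimal sketch, again assuming a hypothetical font file: the swap value shows fallback text immediately (a FOUT) and swaps the web font in once it has loaded.

```css
@font-face {
  font-family: "Brand Sans";
  src: url("/fonts/brand-sans.woff2") format("woff2");
  font-display: swap; /* render fallback text right away, swap when ready */
}
```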

Luckily, Typekit’s Web Font Loader and Bram Stein’s Font Face Observer can help manage font loading behaviour. Additionally, Zach Leatherman, an expert on web font performance, published A Comprehensive Guide to Font Loading Strategies which will aid in choosing the right approach for your project.
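As a rough sketch of a Font Face Observer based FOUT strategy, assuming the CSS only applies the web font when a fonts-loaded class is present on the html element:

```js
import FontFaceObserver from 'fontfaceobserver';

const font = new FontFaceObserver('Brand Sans');

font.load().then(() => {
  // Hypothetical hook: CSS switches to the web font when this class appears
  document.documentElement.classList.add('fonts-loaded');
}).catch(() => {
  // Keep the fallback font if loading fails or times out
});
```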

📝 Web font performance checklist

  1. Audit the font selection
  2. Use Unicode-range subsetting
  3. Establish a font loading strategy

Optimising JavaScript

What we might not realise is that there’s a much more sinister performance bottleneck hiding behind our beloved JavaScript: byte for byte, it is more expensive than other assets, because after being downloaded it still has to be parsed, compiled and executed on the user’s device.

Monitor how much JavaScript is delivered

Parse times for 1 MB of JavaScript vary dramatically across devices, as illustrated in Addy Osmani’s JavaScript Start-up Performance article.

Analysing parse and compile times becomes crucial to understanding when apps are ready to be interacted with. These timings vary significantly depending on the hardware capabilities of the user’s device: parse and compile can easily be two to five times slower on lower-end mobiles. Addy’s research confirms that on an average phone an app can take 16 seconds to reach an interactive state, compared to 8 seconds on desktop. It’s crucial to analyse these metrics, and fortunately we can do so through Chrome DevTools.

Investigating parse and compile in Chrome Dev Tools

📚 Make sure to read Addy Osmani’s detailed write-up on JavaScript Start-up Performance.

Get rid of unnecessary dependencies

Tools such as Webpack Bundle Analyzer visualise exactly what ends up in each bundle, making unnecessary or oversized dependencies easy to spot.
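A minimal sketch of wiring the analyzer into a Webpack configuration, assuming the webpack-bundle-analyzer package and a hypothetical entry point:

```js
// webpack.config.js
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  entry: './src/index.js', // hypothetical entry point
  plugins: [
    new BundleAnalyzerPlugin(), // opens an interactive treemap of the bundle contents
  ],
};
```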

We can make the cost of imported packages even more visible with the Import Cost extension for VS Code and Atom.

The Import Cost extension in action.

Implement code splitting

Webpack, one of the most popular module bundlers, comes with code splitting support. The most straightforward approach is to split per page (home.js for a landing page, contact.js for a contact page, and so on), but Webpack also offers a few advanced strategies, such as dynamic imports and lazy loading, that are worth looking into.
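As a minimal sketch of a dynamic import (the file and its init export are hypothetical), Webpack turns the import() call into a separate chunk that is fetched only when it is actually needed:

```js
document.querySelector('#contact-link').addEventListener('click', () => {
  import('./contact-form.js')
    .then((module) => module.init()) // init() is a hypothetical export
    .catch((error) => console.error('Chunk failed to load', error));
});
```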

Consider framework alternatives

Before starting a new project, it’s worthwhile to determine which features are necessary and to pick the most performant framework for your needs and goals. Sometimes that might mean opting to write more vanilla JavaScript instead.

📝 JavaScript performance checklist

  1. Get rid of unnecessary dependencies
  2. Implement code splitting
  3. Consider framework alternatives

Tracking performance and the road forward

User-centric performance metrics

The metrics that best reflect what users actually experience are First Paint, First Meaningful Paint, Visually Complete and Time to Interactive.

  • First Paint: the point at which the browser first renders anything that visually differs from the blank screen.
  • First Meaningful Paint: text, images and major items are viewable.
  • Visually Complete: all content in the viewport is visible.
  • Time to Interactive: all content in the viewport is visible and ready to interact with (no major main thread JavaScript activity).

These timings directly correspond to what users actually see and therefore make excellent candidates for tracking. If possible, record them all; otherwise pick one or two to get a better understanding of perceived performance. It’s worth keeping an eye on other metrics as well, especially the number of bytes (optimised and unpacked) we’re sending.
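In the browser, First Paint and First Contentful Paint can be recorded in the field through the Paint Timing API; First Meaningful Paint and Time to Interactive still require lab tooling such as Lighthouse. A minimal sketch:

```js
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // e.g. "first-paint" or "first-contentful-paint" with a timestamp in ms
    console.log(entry.name, Math.round(entry.startTime));
  }
});
observer.observe({ entryTypes: ['paint'] });
```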

Setting performance budgets

Unfortunately, there’s no magical formula for setting performance budgets. Often they boil down to competitive analysis and product goals, which are unique to each business.

When setting budgets, it’s important to aim for a noticeable difference, which usually means at least a 20% improvement. Experiment with and iterate on your budgets, using Lara Hogan’s Approach New Designs with a Performance Budget as a reference.

🛠 Try out the Performance Budget Calculator or the Browser Calories Chrome extension to aid in creating budgets.

Continuous monitoring

Google Lighthouse is an Open Source project that audits performance, accessibility, progressive web apps and more. Lighthouse can be run from the command line or, since recently, directly in Chrome Developer Tools.

Lighthouse performing a performance audit.

For continuous tracking, opt for Calibre, which offers performance budgets, device emulation, distributed monitoring and many other features that are hard to get without carefully building your own performance suite.

Comprehensive performance tracking in Calibre.

Wherever you’re tracking, make sure the data is transparent and accessible to the entire team or, in smaller organisations, to the whole business.

Performance is a shared responsibility that spans further than development teams: we’re all accountable for the user experience we create, no matter our role or title.

It’s incredibly important to advocate for speed and to establish collaboration processes that catch possible bottlenecks as early as the product decision or design phases.

Building performance awareness and empathy

As technologists, it’s our responsibility not to hijack attention and time people could be happily spending elsewhere. Our objective is to build tools that are conscious of time and human focus.

Advocating for performance awareness should be everyone’s goal. Let’s build a better, more mindful future for all of us with performance and empathy in mind.
