Answers to Questions About Performance

Paul Lewis
Published in Google Developers
6 min read · Mar 14, 2016

I got an email with some questions about performance. Matt Gaunt got a similar one, and he answered his questions out in the open, so I thought I’d do the same.

Let’s talk frameworks! Where do you draw the line between developer convenience and user satisfaction? Is budget the only defining factor in how you would approach a project?

I draw it almost entirely in favour of the user, and I’m not alone in doing so: the very first item in Google’s corporate philosophy is “Focus on the user and all else will follow.”

Focus on the user.

It’s also what the W3C specify:

In case of conflict, consider users over authors over implementers over specifiers over theoretical purity. — Priority of Constituencies

To do that requires a good foundational knowledge of the web platform. For many people, inexperience with the platform (we’re all new to everything at some point!), time pressures (inside or outside of work), or existing technical decisions prevent them from making the calls they would otherwise make. But where it’s possible I always advocate for prioritising the user over everything else. Which isn’t to say that other considerations aren’t important; they’re just less important.

Do I set budgets? Yes, but only ones based on RAIL, to try and meet users’ expectations of what it means for something to feel fast. Sometimes they’re hard to hit (they’re something of a gold standard), but it’s worth having something to aim for!
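To make that a little more concrete, here’s a rough sketch, entirely my own rather than anything prescribed by RAIL, of checking work against two of those budgets in the browser: respond to input within 100ms, and keep individual chunks of main-thread work under 50ms. It assumes a browser with the Long Tasks API, and handleClick() stands in for a real application handler.

```js
// Flag any task that blocks the main thread long enough to threaten the
// RAIL idle budget (50ms chunks). Assumes the Long Tasks API is available.
if ('PerformanceObserver' in window) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.warn(`Long task: ${Math.round(entry.duration)}ms on the main thread`);
    }
  });
  observer.observe({ entryTypes: ['longtask'] });
}

// Wrap an input handler to check it against the 100ms response budget.
// handleClick() is a hypothetical application handler.
function withResponseBudget(handler, budgetMs = 100) {
  return (event) => {
    const start = performance.now();
    handler(event);
    const elapsed = performance.now() - start;
    if (elapsed > budgetMs) {
      console.warn(`Handler took ${Math.round(elapsed)}ms, over the ${budgetMs}ms budget`);
    }
  };
}

document.addEventListener('click', withResponseBudget(handleClick));
```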

Paul Lewis [I know that guy… total idiot — PL] offered some interesting thoughts on frameworks. Do you think he is onto something, and do you think frameworks are bad for the end user? Is it fine as long as you clean it up (uncss, compression, etc.) before shipping?

Do I think I’m onto something? Probably the first time in my entire career if so. Ell-oh-ell.

The reason I wrote that post was because I’d just had a poke around the React documentation and disagreed with one of its premises, namely that DOM mutation is slow, and that JavaScript is less likely to be a bottleneck. It’s rarely that simple.

As I got into that discussion a bit more I realised my general philosophy of web development is very different to many others’: I will naturally avoid an Inversion of Control.

In a framework, unlike in libraries or normal user applications, the overall program’s flow of control is not dictated by the caller, but by the framework — Riehle, Dirk (2000), Framework Design: A Role Modeling Approach (PDF), Swiss Federal Institute of Technology

I don’t like this state of affairs: who’s in control, and if (or, more likely, when) things go wrong, who’s responsible? If the answer to both of those questions is not “the developer” then there’s a problem. In short, with a framework you are responsible, but not necessarily in control.

To be super clear, a library that does a job for me is extremely welcome, whether that’s date formatting, model storage & retrieval, routing, or history state management. I am a big fan of libraries, although again I won’t include one if the code I’d otherwise have to write myself is relatively small; it largely depends on the number of edge cases the library covers, and on which User Agent(s) the app’s intended users will be running. If a library misbehaves in any way, I can swap it out for another. If a library is sufficiently large, or it has sufficient reach, it becomes indistinguishable from a framework.

To address the question a little more directly: is it really a problem if you clean a framework up? Yes, it can be. Even if you do all the right things from a bandwidth point of view, you can still spend a long time waiting for the JavaScript to be parsed and evaluated and for all of its objects to be instantiated. All of that is time during which the main thread is blocked.

Modern frameworks are shifting towards a server-side rendered view of the world, but that alone does nothing to progressively bootstrap their client-side code, meaning there’s a valley between the initial server-side render and the app becoming interactive. A framework that can spread its initialization out, and which actively prioritizes progressive bootstrapping, is one that supports a user-centric world. To date I’ve not seen any pursue that goal, but I may well have missed one.
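To illustrate the idea, here’s a rough sketch of progressive bootstrapping done by hand; it isn’t taken from any particular framework. The server-rendered HTML works for the essentials straight away, and the heavier client-side code is pulled in when the main thread has idle time. The './heavy-app.js' path, enhanceComments() and toggleMenu() are hypothetical placeholders.

```js
// 1. The server-rendered HTML is already on screen; wire up only the
//    interactions the user needs immediately. toggleMenu() is hypothetical.
const menuButton = document.querySelector('.menu-button');
if (menuButton) {
  menuButton.addEventListener('click', toggleMenu);
}

// 2. Defer the rest of the app until the browser has spare time, so parsing
//    and evaluating it doesn't block those first interactions.
function loadRestOfApp() {
  import('./heavy-app.js') // hypothetical bulk of the client-side code
    .then((app) => app.enhanceComments())
    .catch((err) => console.error('Deferred bootstrap failed', err));
}

if ('requestIdleCallback' in window) {
  requestIdleCallback(loadRestOfApp);
} else {
  setTimeout(loadRestOfApp, 1); // simple fallback where the API is missing
}
```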

I’m interested, with 4G, Google Fiber and whatnot, is web performance really that important? Why/why not? Do you have any rules on what good enough means to you?

I would suggest, if you are able, travelling somewhere like India or Indonesia (assuming, of course, you don’t already live there), where the predominant connection type is 2G. Alternatively, take an overground train in London and you will come up against patchy or non-existent connections. I, for one, can vent virtually unlimited amounts of frustration at the cell towers around Clapham Junction!

Oh trains in London, you are the worst for my Twitter polling.

It is a mistake to think that “connections are getting faster; what’s slow today won’t be an issue tomorrow.” We were doing just fine and dandy with our desktop sites, then smartphones happened. All of a sudden we found ourselves trying to figure out why our A+ desktop experiences, which had banked on Moore’s Law holding true, no longer functioned in a world of constrained CPUs and GPUs, smaller screens, touch input, and patchy 2G & 3G connections.

It strikes me that we’re better served working up from a base level (yay Progressive Enhancement). If it’s good on 2G, it will be near-instant at cable speeds. Nobody loses there.
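As a small sketch of what enhancing upwards from that base level can look like, here’s one way to only layer on heavier assets when the connection looks capable. It assumes the Network Information API (navigator.connection.effectiveType), which not every browser exposes, and loadHighResImages() is a hypothetical stand-in for whatever the enhancement happens to be.

```js
// Build up from a 2G-friendly baseline. Browsers without the Network
// Information API simply keep the lightweight default.
const connection = navigator.connection;
const looksSlow = connection && /2g/.test(connection.effectiveType || '');

if (connection && !looksSlow) {
  loadHighResImages(); // hypothetical: only add heavy assets on faster connections
}
```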

Necessary plug: Service Workers help in a world of patchy and non-existent connections, and they can be added as an enhancement.
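By way of a minimal sketch of what “added as an enhancement” means in practice: feature-detect first, so browsers without support simply carry on, and have the worker fall back to the cache when the network disappears. The '/sw.js' path is a placeholder, and the fetch handler assumes you populate the cache elsewhere (for example during the worker’s install step).

```js
// In the page: register the worker only if the browser supports it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then((reg) => console.log('Service Worker registered for', reg.scope))
    .catch((err) => console.log('Service Worker registration failed:', err));
}

// In sw.js: try the network, and fall back to the cache when the connection
// is patchy or absent (assumes the cache was populated during install).
self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request).catch(() => caches.match(event.request))
  );
});
```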

A blank website is inaccessible, so should website performance perhaps be placed in the same category as accessibility, colour contrast, legibility, etc.? Should there be a minimum loading time, as with minimum colour contrast?

I think performance, accessibility, and security share some traits: they can’t be retro-fitted to a project, they’re often thankless tasks, and they’re only notable by their absence. They’re all, however, the bedrock of a good user experience, onto which you can layer high quality designs and interactions.

While I make blunders in all three areas (more often than I’d like!), I do try and consider them in every one of my builds.

Matt Gaunt also answered some of these questions in a Medium post. Comments or thoughts?

I mostly agree with what Matt said: the vast majority of web development, actually all programming, is a balancing act, which is why it takes experience and consideration.

If I can paraphrase a couple of his points (sorry if I got the wrong end of the stick on either of these, Matt!), these are the things I disagree with him about:

  1. “You need to be an expert to build without a framework.” You might make some mistakes if you build without frameworks (and you have to be tactical about when you do it), but if you don’t do it, how are you ever going to gain knowledge of the web platform? How can you be a mechanic if you’ve never tinkered with a car’s engine? That’s not going to be possible for everyone all of the time, and it’s perhaps a privilege to be able to do so, but where you can, I believe you should at least try.
  2. “The next step up from vanilla is a framework.” As I said earlier, I’m a big fan of libraries: they can shortcut a lot of hard work without inverting control. You can get very far with tactical use of libraries.

Any closing thoughts on web performance?

It’s worth the effort. The people at the end of the chain are why we build our experiences, and I think we owe it to them to try and build performant sites and apps.
