It’s Game Time for the Web

I hope you watch Alex’s latest presentation showing how the computer in your pocket massively underperforms: battery size limits how much of the available hardware we can actually use, heat dissipation is a serious problem, and some of the tricks the OS employs (e.g. spinning up the cores when launching an app) don’t fully apply to loading a new web app within a browser.

Just as he sciences the shit out of our users’ mobile devices and their constraints, we as web developers need to do the same: everything we can to build experiences that actually work within them.

I mentioned in my own talk this week that I feel the Web is still evolving from the world of desktop apps to mobile. The browsers are ahead of the pack, getting a lot of optimizations around how to work in that environment. With Progressive Web Apps we have gotten the key capabilities we need to be mobile native, with more coming there too. But what about our frameworks and other end user code? Sometimes it feels like Alex is being “anti-framework” when he pushes on what he is seeing in the real world, but take a closer look. He isn’t ranting about particular frameworks, he is just frustrated to see the common outcomes where grabbing a framework these days often doesn’t end with a great experience out of the box.

The frameworks that we have also have to be optimized for the same mobile constraints, and we need to build new patterns. The good news is that this has been happening. The first real PWA was Flipkart’s mobile commerce app, and it is very much React. They just made sure that they optimized it for mobile performance, and continue to do so. A large reason for Angular 2 to exist was to re-think core pieces to allow for good mobile performance. And of course Polymer did the same, resulting in examples such as Shop that have traces good enough for Alex to cry a tear of joy.

Our priority is mobile performance

The Web is so much more than just mobile, but the core existential threat lies here right now. If you can’t paint something quickly and become interactive shortly after, the user may have already moved on. Your mission is to get a first load going in 3 seconds on 3G, and it’s hard. If the user is coming back again, you have hopefully gotten a service worker up and running and you can get down to 500ms, especially if you can serve data from the cache first while you wait on fresh data from the network.
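The cache-first, then network idea can be sketched as a small helper. This is a minimal sketch, not any particular library’s API: `cache` and `network` are injected stand-ins for the Cache Storage API and `fetch()` you would use inside a real service worker.

```javascript
// Sketch of the "cache first, then fresh from network" pattern.
// `cache` and `network` are injected so the idea stays clear; in a
// real app these would be the Cache Storage API and fetch() inside
// a service worker.
async function cachedThenFresh(cache, network, key, render) {
  const cached = await cache.get(key);
  if (cached !== undefined) {
    render(cached); // paint something immediately from the cache
  }
  try {
    const fresh = await network(key); // meanwhile, ask the network
    await cache.set(key, fresh);      // refresh the cache for next time
    render(fresh);                    // update the view once it arrives
    return fresh;
  } catch (err) {
    if (cached === undefined) throw err; // offline and no cache: give up
    return cached;                       // offline but cached: still usable
  }
}
```

The user sees *something* in the time it takes to read from the cache, and the fresh data slides in when the network finally answers.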

Given that this is our mission, and it is a real challenge, we need to use as many tricks and illusions as possible. Smoke and mirrors are a time-honored tradition in computing, both for users and for developers. A user doesn’t need to comprehend that the computer is running through a slew of tasks and only repainting the cursor at the last moment before they would notice. A developer doesn’t have to think about the assembly that is running, or the physical work being done, as they read from a socket. These abstractions allow us to get closer to the user and build things at the right level of complexity.

The power of build and deploy time abstractions

One of the great things about the early web was its simplicity. You could throw up and edit some <markup>, hit refresh in the browser, and iterate away. There is something alluring about this, and whenever we put build steps between our source and deployed artifacts people get queasy. But why?

Development iteration time

If it takes real time to go from hitting save in your editor to being able to see and work with the changes, you will not be able to stay in the flow. Over time, though, we have built tools that not only keep that time short, but allow for hot deployment, so you don’t even always need to refresh your experience and navigate back to where you were.


Debugging

When something is going wrong you need the illusion of seeing the problem in your source artifacts. You can’t trawl through obfuscated craziness and manually tie it back to the source. With tools such as source maps we get to solve these important problems.
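With webpack, for example, this illusion is a one-line setting. A sketch of a common setup (the `devtool` values are webpack’s own option names; the rest of the config is elided):

```javascript
// webpack.config.js (fragment) — sketch of a common source-map setup.
module.exports = (env, argv) => {
  const dev = argv.mode === 'development';
  return {
    // 'eval-cheap-module-source-map': fast rebuilds with usable,
    //   line-accurate stack traces while iterating.
    // 'source-map': full, slower maps emitted as separate files for
    //   production debugging in DevTools.
    devtool: dev ? 'eval-cheap-module-source-map' : 'source-map',
  };
};
```

You debug against what you wrote; the browser runs what the machine crunched.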

So, while I love the idea of making quick changes to HTML, CSS, or JavaScript without tools doing too much magic in between, I am sorry: for where we are right now, I think we need to embrace build and deploy tools even more.

Why would we want this abstraction that gets in the way? Because we should do the hard work where we have the time. The Web is in an extreme situation. When you send down your first page load, the browser has to bootstrap a lot in a short time window:

  • Download resources, finding and getting dependencies
  • HTML => DOM
  • CSS => styling and layout
  • Evaluate and execute JavaScript
  • and much more

Oh, and it has to do this progressively as information is streaming in (which is also the blessing that means that if we don’t fight the platform we can get AMAZING performance).

The fact that the browser has to do all of this so quickly means it has to make real trade-offs. Think of an image compressor or a compiler. With those tools you can choose the level of optimization based on the trade-offs you want to make. In development you are looking for compile speed, and for debugging symbols to be available. For a production build you would happily let the computer crunch like crazy, because the time you have is basically infinite in comparison.

Why wouldn’t we want to use this time to crunch on our web apps like crazy? This is where we can learn from the world of games. Games are another area where performance is always at the top of the list, so game engines and developers use as many tricks as possible. They will run assets through a pipeline that pre-renders shadows and produces variants in multiple formats. They will trade disk size and duplication for runtime performance in a heartbeat.

With the Web we need tools that optimize for initial load, subsequent loads, and general runtime performance. You can imagine tools doing extra work to get the initial load bundles optimized up the wazoo. You see this with code splitting in tools such as webpack and browserify. It can still be a lil bit of a pain, but if you don’t do this and are lazy (in a bad way) then you end up with a 2+MB bundle.js.
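The idea behind code splitting can be sketched with dynamic `import()`. With webpack, each `import()` call below becomes its own chunk, fetched only when the user first navigates there; the route names and module paths here are made up for illustration.

```javascript
// Sketch: map routes to lazily loaded view modules. With webpack,
// each import() becomes a separate chunk, downloaded on demand.
// Route names and paths are illustrative, not a real app.
const routes = {
  '/': () => import('./views/home.js'),
  '/checkout': () => import('./views/checkout.js'),
};

// Memoize the loader so each chunk is fetched at most once,
// no matter how many times the user revisits the route.
function lazy(loader) {
  let promise;
  return () => (promise = promise || loader());
}

const loadRoute = Object.fromEntries(
  Object.entries(routes).map(([path, loader]) => [path, lazy(loader)])
);
```

Instead of one 2+MB bundle.js up front, the first paint only pays for the route the user actually landed on.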

When you are running around a world in a game, the engine only needs to show you what you can see. It wouldn’t make sense for it to render the entire world, would it? This is what we need to do in our web apps: we don’t have to preload the entire “world” of the application. Now, we may not want to load the next part of the app only at the moment the user asks to go there. It would be nice to have the data ready for the room next door, especially if you can predict that the user will be going in there. Prefetching, HTTP/2 server push, and other tricks like these can balance not loading too much at once against making sure things are ready just in time.
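One cheap way to get “the room next door” ready is a `<link rel="prefetch">` hint dropped in while the browser is idle. A minimal sketch, assuming your router can guess the likely next URL (the function names here are mine, not a library’s):

```javascript
// Sketch: hint the browser to fetch a likely-next resource at low
// priority. Uses browser-only APIs (document, requestIdleCallback);
// the URL is whatever your router predicts the user wants next.
function prefetch(url) {
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

function prefetchWhenIdle(url) {
  // Fall back to setTimeout where requestIdleCallback is unsupported.
  const idle = window.requestIdleCallback || ((cb) => setTimeout(cb, 1));
  idle(() => prefetch(url));
}
```

Because `rel="prefetch"` is only a hint, the browser stays in charge: it can skip the fetch entirely on a constrained connection, which is exactly the balance described above.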

We have some of the tools coming together now, but there is a lot of room to get better and also do more to let a developer work at a nice conceptual level that doesn’t push this baggage front and center. What do I mean here? One of the reasons I think developers like the React style of development is the mental model. Given state in the form of data, when I run it through React I get back a certain running application. This style lets me write really nice tests where I can say “I expect this view tree to come back given this state”. It kinda gets you back to the original web, where every click rebuilt the entire world. The user experience hit that took was massive, but it sure was easy for developers to deal with! I am looking for more and more tools that give developers this consistency whilst doing a lot of optimizations behind the scenes.
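That mental model — the view is a pure function of state — is what makes such tests trivial. A minimal framework-free sketch (the `h` helper and `CartBadge` component are made up to show the shape of the idea):

```javascript
// Sketch: a view is just a pure function of state that returns a
// plain tree of objects. No framework needed to test the idea.
const h = (type, props, ...children) => ({ type, props, children });

// A hypothetical component: given cart state, return a view tree.
function CartBadge(state) {
  return state.items.length === 0
    ? h('span', { class: 'badge empty' }, 'Cart')
    : h('span', { class: 'badge' }, `Cart (${state.items.length})`);
}
```

The test really is just “given this state, I expect this view tree back” — no DOM, no browser, no mocking of render internals.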

I sometimes wake up from crazy dreams where the deployment system is smart enough to understand which optimizations to send down to a given browser in the current context. We have used compilers to spit out different versions for various architectures, so we could consider doing the same here. We could even gather analytics, run experiments, and change the deployment bundles based on real-world usage (far out there, I know). Black boxes can be scary, and when they go wrong it can be frustrating, but given the diversity we see in the world I don’t see a better way to deal with the problem than giving in. After all, we can’t expect developers to test on every device, on various OSes, with various browsers, all with various versions, and on global cell networks that are incredibly variable. Humans can’t solve this problem. We are sending unoptimized, crappy experiences to users every day, and we can do better.
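A crude version of that dream already exists as differential serving: the server inspects the request and picks an artifact. A deliberately naive sketch with made-up bundle names — real-world detection usually keys off `<script type="module">` support rather than user-agent strings, and is far messier:

```javascript
// Sketch: choose a deploy artifact per request. Bundle names are
// hypothetical; the UA check is intentionally naive to show the shape
// of the idea, not production-grade browser detection.
function pickBundle(userAgent, saveData) {
  if (saveData) return 'app.lite.js';            // Save-Data: on → minimal bundle
  if (/MSIE|Trident/.test(userAgent)) return 'app.legacy.es5.js'; // old IE engines
  return 'app.modern.js';                        // modern syntax for evergreen browsers
}
```

Layer analytics and experiments on top of a function like this and you start to approach the dream: the deployment system, not the developer, owns the matrix of devices and networks.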

It may change over time

The severity of the problem with the mobile Web makes me feel like we need to attack it with aggressive build- and deploy-step solutions. However, this may change over time. You can imagine a future where the entire globe has incredibly fast connections and devices with impressive capabilities for all. It may then make sense to have a world with less of a black box, where dev source is much closer to production source. This may happen, or we may just keep pushing the bar and always be playing some level of catch-up where this is somewhat needed. All I know for sure is that we need it right now.

App Stores are different

It is interesting to compare the problem to native mobile platforms. By having an explicit install step we get to bypass some of these issues, but not all of them. It is common to have duplicate assets that all need to be downloaded first and then take up space. I don’t want 6 sizes of content for various devices and screens. I don’t want to download various native binaries that I don’t need. And I really don’t want to download the huge amount of code and assets for $THAT_FEATURE that I never use in the application. We are starting to see better modularization, with app stores only sending down what is needed, and I expect to see more of this, even though it adds complexity. The simple “one app with everything we have” approach, which is the default, is akin to the reload web in my opinion… we will get past it onto a better world.

In conclusion

I hope that you now understand why some of us are speaking so strongly about mobile constraints and the Web. Fire up Chrome DevTools and do a trace on a real device (Moto G4 + chrome:inspect), run WebPageTest on a Nexus 5 / Moto G config, and see how you are doing. Run Lighthouse on your application. If you aren’t there yet, take the time to set up your dev infrastructure so it does the hard work to get the experience up to snuff. Your users will thank you, and so will your business metrics.