Tooling is Not the Problem of the Web

Sebastian Markbåge
May 15, 2015 · 5 min read


Tooling is the Future of the Web

Disclaimer: I work at Facebook, full-time on React.js and Web Standards.

I follow a lot of Web people. I’ve been in that space for a long time, and so have they. They’re people I’ve looked up to and learned from. However, every once in a while I see this tired old narrative repeated:

“The web platform is good enough. People are just not smart / educated enough to use it.”

This narrative tends to be repeated over and over by people who became Web experts in the early 2000s or even the 90s. Many of them now work on browser teams and in web standards. These are the web idealists. Unfortunately, I think this mentality is what is killing the Web.

Insulting

IMHO, this view of the Web is out of touch and insulting. It overlooks the huge number of extremely talented engineers who have tried to make the Web work for them. When Facebook went native, these were the people who assumed it was due to a lack of talent or effort. Believe me, if you had seen all the work that was tried and dropped to make Facebook work on the mobile web at the time, you would be convinced that the Web wasn’t ready. Similar work continues to make the Web work for companies and organizations all around the world.

I’m sort of making an argumentum ad populum, but I don’t believe all these talented people tried and failed for no reason. They tried because the status quo wasn’t good enough.

Meanwhile, a few of the smartest, humblest, and most mature thought leaders in this world realized that the Web is not good enough. More than that, it will never be good enough if we continue thinking that browser vendors have the knowledge and bandwidth to cover every use case by themselves.

This is the foundation of the Extensible Web Manifesto.

It just makes sense. How can a small group of people agree on a high-level API that will work for every developer in the world… in the future?

Libraries over Low Level Features

The proliferation of libraries came first, then the polyfills, then the package managers, then the transpilers, and so on. This is not some random event in the history of the Web. The painful realization is that we will never reach a point when the Web is done. If you are dreaming of a better time when everything you ever needed is built into the browser, then you’re going to have a bad time. The fact is that ES6 isn’t even fully implemented in browsers, yet a large number of developers working with modern techniques are already using ES7 features.
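
To make that concrete, here is a hedged sketch (my illustration, not from the original post) of the kind of code teams were already shipping in 2015, compiling proposal-stage features down to JavaScript that browsers understood, using a transpiler such as Babel:

```js
// Illustrative only: async/await and object spread were still proposals
// ("ES7"-era) while browsers had not yet finished implementing ES6.
async function loadProfile(userId) {
  // await suspends the function without blocking the main thread.
  const response = await fetch('/api/users/' + userId);
  const profile = await response.json();
  // Object spread copies the fetched fields into a new object.
  return { ...profile, loadedAt: Date.now() };
}
```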

Sure, some features of the modern web are unnecessary. Some tools are bloated or just bad. However, every day someone is building a new tool or feature that solves an important new use case.

The only time you can say that the Web is “good enough” is when you’re building for yesterday’s Web. That is how many of us started. We kept building the same kinds of websites over and over again by just writing HTML and some simple JS. Our community then realized that it was unnecessary to keep building the same things, so we created reusable software: blog software like WordPress or Medium, for example.

The future of the Web is in innovation, not the same old products: use cases that we haven’t even thought about yet. The only way the Web can support that is by exposing more lower-level features. Lower-level features can support more use cases than a constrained high-level API can. Even when a new use case can be implemented on top of such an API, targeting it in a way it wasn’t intended to be used requires a huge number of fragile hacks and deep expertise. The Web is not a platform for innovation today.
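
As one hedged example of what a lower-level feature buys (my sketch, not the post’s): the Custom Elements primitive lets user-space code define its own high-level vocabulary instead of waiting for browser vendors to ship a built-in element for every use case.

```js
// Illustrative sketch: a library-defined element built on a low-level
// browser primitive (Custom Elements) rather than a built-in feature.
class UserCard extends HTMLElement {
  connectedCallback() {
    // A real component would render richer markup; this is the minimum.
    this.textContent = 'User: ' + (this.getAttribute('name') || 'unknown');
  }
}
customElements.define('user-card', UserCard);

// Usage in markup: <user-card name="Ada"></user-card>
```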

What About Initial Load?

Which brings me to PPK’s post “Tools don’t solve the web’s problems, they ARE the problem”. In this case he is referring to the Facebook product “Instant Articles”. He argues that we download way too many things and execute way too many things on start-up, which makes individual articles slow to load, and that the traditional old-style web would be perfectly suitable for this use case.

PPK’s argument doesn’t take into account that this team had already done a lot of work to optimize the previous Web-based experience, including prefetching and storing resources on CDNs where possible. I’m not involved with that team, and I don’t know anything about the choices behind this product. From a technical perspective, I would suspect it has more to do with Apple’s monopoly on iOS WebViews, and the slow-moving tech that comes with it, than with anything in the Web platform itself.

There are key features of this experience that are difficult to replicate with just the traditional Web. The lack of control over even one simple feature leads teams like this to rebuild the system from the ground up.

Cache

If the future of the Web is not built by browser vendors but by client-side libraries, how will we get fast initial downloads? Is it server-side HTML/CSS rendering? No. That only gives us access to the features already in the browser.

Then we need to download the JavaScript on every page load, right? No. You don’t download the DOM for every page. New versions of the DOM are automatically downloaded by the browser and then cached. We can do the same for libraries.
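
As a hedged sketch of what “cache it like the DOM” could mean with today’s primitives (the server, paths, and settings below are hypothetical): versioned library bundles served with long-lived cache headers are downloaded once and then reused on every subsequent page.

```js
// Hypothetical Express sketch: serve hashed, versioned library bundles
// with long-lived cache headers so each file is downloaded only once.
const express = require('express');
const app = express();

app.use('/libs', express.static('libs', {
  maxAge: '365d',   // safe, because a new version gets a new file name
  immutable: true,  // Cache-Control: immutable; the URL's content never changes
}));

app.listen(3000);
```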

Here’s an important question: Why isn’t jQuery, Underscore, or even React bundled in the browser? It would help speed up initial load and could potentially even be pre-executed for quick start-up. The ideal solution isn’t to bundle more things into a browser auto-update bundle. It would be better to break the browser apart into smaller bundles, optimistically prefetch common ones per user, and enable better caching primitives. Service Workers are one good step in this direction.
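
Here is a minimal Service Worker sketch along those lines (the cache name and bundle paths are mine, purely illustrative): common library bundles are prefetched when the worker installs and then served from the cache on every later page load.

```js
// sw.js: an illustrative Service Worker that prefetches and caches
// shared library bundles so repeat visits skip the network entirely.
const CACHE = 'lib-cache-v1';
const LIBS = ['/libs/react.min.js', '/libs/underscore.min.js'];

self.addEventListener('install', (event) => {
  // Download and cache the bundles once, at install time.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(LIBS)));
});

self.addEventListener('fetch', (event) => {
  // Answer from the cache first; fall back to the network otherwise.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```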

One day you’ll download a JavaScript version of the DOM that renders to WebGL. The DOM is just another library.

There are still infrastructure problems to solve to make this work well. But make no mistake: the future of the Web isn’t the 2005 DOM/HTML document. It is prefetched and cached user-space libraries.

More tools, not fewer.
