Modern Modules

Mikeal Rogers
6 min read · Sep 10, 2017


Re-thinking the Node.js ecosystem for modern JavaScript.

A few months back I sat down to write some code.

Node.js 8 had been out a while and I decided to take advantage of some of the new language features like async/await in my new code.

Over the next month I wrote a half dozen small modules.

By the end of the month I’d made some pretty huge changes to how I do open source development and how I see the Node.js ecosystem moving forward.

I no longer believe that I can handle even moderate maintenance burden from modules I create.

After working on Node.js’ governance process I’m very comfortable writing and running liberal contribution policies. I think these practices are key to sustainability, especially for large projects.

The problem they don’t solve is the aggregate maintenance burden of many small modules that haven’t yet grown large enough to retain maintainers.

Recently, every time I create a module I feel like I’m writing a check for future time and attention I won’t be able to pay.

In order to offset this burden I adopted a set of practices pioneered by the Hoodie team using some of the infrastructure that has spun out of that community.

  • I no longer do any releases by hand; releases are entirely automated with semantic-release and run on every check-in.
  • I don’t release a project until it has 100% test coverage, and going forward test coverage cannot drop below 100%. This is easy to do when you start a module but incredibly difficult to achieve after a module has matured.
  • I have npm scripts for creating nice commit messages, running coverage reports, automating git hooks for verification, etc. This helps me and other contributors align with the tools (a rough sketch of this setup follows the list).
  • All projects have Greenkeeper enabled, automating all dependency upgrades.
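Concretely, most of this setup lives in package.json plus the project’s CI configuration. The sketch below is an assumption of what such a setup typically looks like with semantic-release, nyc, commitizen, and husky, not a copy of any one project’s exact config:

```json
{
  "scripts": {
    "test": "nyc --check-coverage --lines 100 node test.js",
    "commit": "git-cz",
    "precommit": "npm test",
    "semantic-release": "semantic-release"
  },
  "config": {
    "commitizen": {
      "path": "cz-conventional-changelog"
    }
  }
}
```

CI runs npm test and npm run semantic-release on every check-in, and Greenkeeper is enabled on the repository itself rather than configured here.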

This drastically reduces the maintenance burden of each module. It makes me confident that I can handle the work of reviewing and merging pull requests as a project grows and eventually retains other maintainers.

Most of the software infrastructure I have used and built over the last 7 years, since the beginning of Node.js, gets in the way of using modern patterns.

Node.js has not had any large hard breaks in compatibility. Small things have been deprecated or changed but nothing as fundamental as replacing core patterns.

There probably never will be a break in compatibility at that scale; the community does not want a Python 3-level event that splits the compatibility of the ecosystem in two.

But that doesn’t mean we’re done innovating. That doesn’t mean there aren’t new patterns that will need to replace the old. That doesn’t mean that some patterns were so flawed that we might need to abandon them.

Every module I’ve written in the last 7 years was tightly bound to two core patterns: standard error-first callbacks and Streams. To say that I’ve become skeptical of these patterns would be an understatement.

I was never compelled by the arguments made in favor of Promises, which pre-date even Node.js. For a variety of reasons I won’t dive into, I don’t think that composing futures as object chains is a good idea. I still don’t think those patterns are an improvement on standard callbacks. But async/await is quite different.

Under the hood, promises enable these new language-level features, but once they become syntax their effect is completely different: they get to offload a lot of the prior complexity of using callbacks and promises onto the rest of the language.

With async/await you don’t have two error-handling patterns. With callbacks or promises you had one way to handle errors in I/O operations and another in synchronous operations. With async/await it’s all the same.

You don’t have to go out and find a bunch of new libraries for iteration, manipulation, etc. You can use regular for and while loops with await, you can return early to end iteration, and you can catch all errors with a try block.

With async/await you have one unified set of patterns rather than two.
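Here’s a small sketch of what that looks like in practice, assuming a Node.js version with a promise-returning readFile (the function and its behavior are made up purely for illustration):

```js
const { readFile } = require('fs').promises

// One error-handling pattern for everything: rejected promises from I/O,
// synchronous errors from JSON.parse, and early exits all flow through
// the same try/catch and a regular for loop.
async function firstJSONFile (paths) {
  for (const path of paths) {
    try {
      const contents = await readFile(path, 'utf8') // async I/O
      return JSON.parse(contents)                   // sync work, same catch
    } catch (err) {
      if (err.code === 'ENOENT') continue // missing file, try the next one
      throw err                           // anything else bubbles up normally
    }
  }
  return null // a plain early return ends the iteration
}
```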

But this means a big shift in the ecosystem. It’s not only a matter of wrapping old libraries in promises, we need to re-think most of the APIs we’ve built from the perspective of these new patterns.

Streams may have been a mistake.

The reasons we created Streams were quite sound. You’ve got a bunch of iterative data structures that need to be piped together and need a flow control mechanism.

But the first two implementations of Streams in Node.js, which are still supported to this day, missed the mark.

Many of the people involved in early Streams work have moved away from them, citing performance and composability concerns.

I’ve become skeptical on another level as well. The more I work with content addressability, the more I see value in breaking streams into chunks that can be hashed independently, which makes verification and resumption much easier.

And finally, the more I work in the browser, where you have no access to flow control information, the more I realize that we have to scale these down to simple read pulls in order to have natural flow control.

There’s a proposal brewing for async iterators, which I think will shake things up even further. In the meantime, I’ve scaled everything I do down to individual read and write calls, usually with some hashing for content addressability.
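As an illustration of that scaled-down style, here’s a rough sketch, assuming a Node.js version with fs.promises; the chunk size and hash algorithm are arbitrary choices, not part of any spec:

```js
const fs = require('fs').promises
const crypto = require('crypto')

// Pull fixed-size chunks with individual read calls and hash each chunk
// independently, so any chunk can be verified (or a transfer resumed)
// on its own. Flow control is simply "don't read until you're ready".
async function hashChunks (path, chunkSize = 64 * 1024) {
  const handle = await fs.open(path, 'r')
  const buffer = Buffer.alloc(chunkSize)
  const hashes = []
  try {
    while (true) {
      const { bytesRead } = await handle.read(buffer, 0, chunkSize, null)
      if (bytesRead === 0) break // end of file
      hashes.push(
        crypto.createHash('sha256').update(buffer.slice(0, bytesRead)).digest('hex')
      )
    }
  } finally {
    await handle.close()
  }
  return hashes
}
```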

The Node.js ecosystem did browser support backwards.

Quite early in Node.js the browserify crew (mostly substack) started building polyfills of Node.js stdlib APIs for the browser. This was amazing at the time because:

  • Node.js had far more interoperable modules than typical browser libraries, which were almost entirely monolithic frameworks.
  • The polyfills were quite small because browsers had very little support for binary, http client, etc.

Browsers have gotten a lot better since 2010. That means the polyfills for the Node.js standard libraries have grown much larger, both to support new and old browsers and to take advantage of better support in the underlying browser APIs.

The Node.js ecosystem also changed quite a bit, with most modules being used in browsers more than in Node.js.

What hasn’t changed is the network: it’s still slow, and browser users still hate large bundles.

It makes far more sense to write modules that directly use new browser APIs and polyfill for Node.js and older browsers when necessary.

Most Node.js users don’t care about an extra 200K for a Fetch polyfill; it’s a drop in the bucket. But browser users care a hell of a lot about a megabyte of Node.js stdlib polyfills.
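The inverted approach is straightforward. Here’s a minimal sketch, with node-fetch standing in for whichever Fetch polyfill a Node.js consumer chooses (the module shape is just for illustration):

```js
// Browser-first: use the platform's fetch when it exists, and only load
// a polyfill (node-fetch here, as an example) when running in Node.js.
const fetch = global.fetch || require('node-fetch')

async function getJSON (url) {
  const res = await fetch(url)
  if (!res.ok) throw new Error(`unexpected status ${res.status}`)
  return res.json()
}

module.exports = getJSON
```

A real module would also point bundlers at a browser build via the package.json "browser" field so browser users never pay for the polyfill at all.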

I’ve been working on a successor to request called r2. It makes heavy use of async/await but the big shift is that it is built on top of Fetch instead of Node.js Core. Node.js users get a Fetch polyfill, which is also used heavily for testing automation.
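A quick sketch of the kind of usage this enables (see r2’s README for the exact API; the awaited .json accessor here follows its documented style):

```js
const r2 = require('r2')

async function main () {
  // awaiting the .json (or .text) property resolves the response body
  const user = await r2('https://api.github.com/users/mikeal').json
  console.log(user.name)
}

main().catch(console.error)
```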

When run through browserify, request is ~2M uncompressed and ~500K compressed. r2 is only 66K uncompressed and 16K compressed.

It’s hard to look at these numbers and think that I haven’t been handling this backwards for years.

We need to rebuild most of the software infrastructure in the Node.js Ecosystem.

In order to move forward we’re going to have to stop using a lot of the software infrastructure we rely on today, but first we need solid alternatives.

I’ve started to take this on, as time allows. I’ve already mentioned r2. I’ve also written an async/await bi-directional RPC library called znode.

And finally, I’ve set up a Patreon page to try to grow financial support for all of my open source work. As I’ve gotten older the commitments I have continue to grow, and it has become clear that, without a path to direct funding, this work will eventually get pushed out by other commitments.

If you add a picture to your Medium post the embeds look much nicer :)
