Everything else in our world is built to a quality standard. Why can’t our code be as well?

How to Improve Code Quality on npm.

Code quality on npm swings, hard. While npm undoubtedly contains packages that are exemplars of some of the best code in the world, other packages are complete hacks. There might be security flaws, a lack of documentation, or everyone’s favorite: breaking changes which don’t increment the major version number. The list of systemic problems with npm packages goes on and on, to the point of insanity.
This isn’t npm’s fault, but rather the community it has created. Yet, in some ways, all of these problems could be a good thing. They allow the community to perform wild experiments and distribute them across the world. However, those wild experiments often cost the community. When a developer comes along looking to build a business-critical service, she might find five “wild experiment” packages which could be used. Instead of picking one which could be a huge liability, the best answer, time and time again, is sadly to just create your own package (have you looked for a data validation package recently?).

This happens because there is no metric. There is no way to measure the quality of a package short of reading the source code and doing a background check on the author. Why can’t we automate these steps? Why can’t npm take the steps a developer might normally do to evaluate a package and enforce these steps before the package even gets published?

You might not know this, but elm-packages actually detects when a breaking change occurs and enforces semantic versioning in addition to requiring documentation for every public facing function. elm-packages does this, so why can’t npm?

Wild Experiments

Well, it would totally break backwards compatibility. Most of the packages hosted on npm would likely be invalidated by the new checks. In addition, quality checks would prohibit “wild experiment” packages, which keep the community healthy, from ever being shared.

Therefore quality checks should be opt-in. Packages which choose to opt in would be preferred by npm’s search ranking algorithm (finally, a good search algorithm!). Furthermore, these packages would also get a badge next to their name wherever it appears on npm, distinguishing them from the rest.

And there doesn’t have to be just one set of quality checks. There would, of course, be a default set provided by npm; but this is a feature which could easily be extended by frameworks and organizations with their own quality checks. React, Angular, Google, IBM, or anyone else in the community could provide their own checks. Any package passing them would be given a stamp of approval by the checks’ author organization, lending that package some of its credibility.

The critical piece is that a package may not be published unless all of its quality checks (by whomever) pass.
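To make the idea concrete, opting in might be as simple as one field in package.json. This is a purely hypothetical sketch (no such field exists today) naming npm’s default check set alongside a framework-provided one:

```json
{
  "name": "example-package",
  "version": "2.0.0",
  "qualityChecks": ["npm:default", "react:components"]
}
```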

Imagine what happens in a world where package quality is enforced.

You could install and use a package without having to do extensive background checks and research. You would always have documentation for any public-facing method. You could sleep safe knowing that a breaking change will actually be semantically versioned. You could enforce that only dependencies which pass your company’s code quality guidelines may be used. And your packages of excellent quality and utility could finally be discovered by the larger JavaScript community.

Plausibility

To discuss plausibility (in an implementation context) let us ask, what are some of the checks npm might choose to require of a package before it gets published? Here’s a list of things I’d like:

  1. Enforced semantic versioning. If a breaking change happens in the public API the author has no choice but to increment the major version.
  2. Required documentation for all public-facing features, presumably using JSDoc.
  3. 90% code coverage of tests or better. This ensures that the package does what it says it does.
  4. Package has a license.
  5. Pull requests get merged and issues get addressed.

All of those are fairly easy to check today, except the first two. Elm does some static module analysis to perform the first two checks, but is that possible in JavaScript? JavaScript has a bunch of dynamic “require”s and “module.exports” assignments which are impossible to statically evaluate. Tools like webpack and browserify try, but they can never completely understand the code until runtime.

Well, luckily JavaScript is not a dead language and ES6 brings with it a way to statically define module dependencies and exports.

Get excited, because ES6 modules are already revolutionizing developer tooling. Just check out eslint-plugin-import: this linter plugin can explore your ES6 imports/exports and tell you if a value you are trying to import does or does not exist. Theoretically, a tool based on ES6 modules could also provide benefits like autocompletion, a huge deal for developers used to an IDE. ES6 modules are a really big deal, and they give us a couple of huge benefits when it comes to enforcing quality.
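Because ES6 export declarations must sit at the top level with static names, even a naive scan over source text can recover a module’s public API. Here is a toy sketch of that idea (real tools like eslint-plugin-import use a proper parser, not a regex):

```javascript
// Example ES6 module source as a string (for illustration).
const source = `
export function parse(input) { /* ... */ }
export const VERSION = '1.0.0';
export default function main() { /* ... */ }
`;

// Naively list the names a module exports, without executing it.
function listExports(code) {
  const names = [];
  const pattern = /export\s+(?:function|const|let|var|class)\s+([A-Za-z_$][\w$]*)/g;
  let match;
  while ((match = pattern.exec(code)) !== null) names.push(match[1]);
  if (/export\s+default/.test(code)) names.push('default');
  return names;
}

console.log(listExports(source)); // → [ 'parse', 'VERSION', 'default' ]
```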

With a package which implements ES6 modules, npm can know exactly which API the package exports. With that knowledge, npm can require each export to have a JSDoc comment, and if a package removes an export, npm will know and can require a breaking change in the version number.
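A sketch of that versioning check (my own illustration, not anything npm actually ships): given the export names of the previous release and the release being published, compute the minimum semver bump the registry should allow:

```javascript
// Decide the smallest allowed semver bump from two export lists.
function requiredBump(previousExports, nextExports) {
  const removed = previousExports.filter((name) => !nextExports.includes(name));
  const added = nextExports.filter((name) => !previousExports.includes(name));

  if (removed.length > 0) return 'major'; // part of the public API disappeared: breaking
  if (added.length > 0) return 'minor'; // new API, backwards compatible
  return 'patch'; // same surface: bug fixes only
}

console.log(requiredBump(['parse', 'stringify'], ['parse'])); // → major
console.log(requiredBump(['parse'], ['parse', 'stringify'])); // → minor
console.log(requiredBump(['parse'], ['parse'])); // → patch
```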

At the moment ES6 modules are only supported in a transpilation environment, but that doesn’t stop npm from using conventions like jsnext:main to get the ES6 source before it’s supported in Node.
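Under that convention, a package ships a compiled CommonJS entry point for today’s consumers alongside its original ES6 source (the file paths here are illustrative):

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "main": "dist/index.js",
  "jsnext:main": "src/index.js"
}
```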

We can have the future now. We just have to pull it towards us.


I have a lot of ideas on an exact API for a feature associated with this idea, and I’d be willing to share (and maybe even develop) them if there is enough interest. The best place to express that interest is @calebmer on Twitter.

It would be awesome to see this feature implemented by npm, so let’s make some noise.