My First NPM Module

Rifdhan Nazeer
7 min read · May 31, 2017


I recently published my very first NPM module: weighted-randomly-select. I am proud to have open-sourced it (under the MIT license), and it has already seen over 50 downloads since it was published on Thursday. I am writing this post to share some of the troubles I encountered on the journey toward publishing that first module.

Deciding What to Make

The first issue I faced was picking an idea for my first module. I wanted to select some piece of code that I’d already written and polish it, not write something completely new. Naturally I started by browsing through the source for a Discord bot I made in JavaScript, which is currently my biggest JavaScript-based personal project.

Looking through the various utilities and helper functions, I came across some code I had written that randomly selects an option from a given list of weighted choices. I use this function often throughout the code base to generate random outputs, or to add variation to the output of a common command (so that it's slightly different every time instead of identical). It seemed like the perfect candidate to package and publish: it was short, self-contained, and had no external dependencies. Most importantly, it had a wide variety of use cases and seemed likely to be useful to other people.
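To make the idea concrete, here is a minimal sketch of how weighted random selection can be implemented. This is illustrative only, not the module's actual source; in particular, the `{ weight, result }` choice shape is an assumption:

```javascript
// Sketch of weighted random selection (illustrative, not the module's code).
// Each choice has a weight; its probability of being picked is weight / totalWeight.
function selectWeighted(choices) {
    // choices: array of { weight: number, result: any }
    const totalWeight = choices.reduce((sum, choice) => sum + choice.weight, 0);

    // Pick a point in [0, totalWeight) and find which choice's range contains it
    let remaining = Math.random() * totalWeight;
    for (const choice of choices) {
        remaining -= choice.weight;
        if (remaining < 0) return choice.result;
    }

    // Fallback for floating-point edge cases: return the last choice
    return choices[choices.length - 1].result;
}
```

With choices weighted 1 and 3, for example, the second option is selected roughly three times as often as the first.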

Polishing to Perfection

The next step in the process was to improve the code to a standard that I'd be comfortable sharing publicly, since open-sourcing it was an important part of the experience for me. The code at the start was visibly hacked together, left a few edge cases unconsidered, and had other embarrassing flaws. Take a look below and you'll see what I mean.

Random selection code before polishing

Don't get me wrong: the code wasn't low quality. It was clearly written, well-formatted (which is a luxury in some code bases), and well-documented (unheard of in many places). I just saw a lot of little things that could be improved, so I got to work fixing them. Here is a simplified version of the code after polishing (the latest sources are available on my GitHub):

Random selection code after polishing

As you can see, the code became quite a bit longer (more than double the raw line count). Almost all of the extra code was input validation. One of the many luxuries of writing code entirely by yourself is that you can make assumptions about how you will subsequently use that code, and input validation is a great example. When writing utility or helper functions, I rarely check that the given inputs are valid before proceeding, because I trust myself to provide valid inputs whenever I call those functions. I leave it to testing to catch any invalid inputs I provide accidentally. For the most part this works just fine, and I can get away with writing just the functional code itself rather than "wasting" time on superfluous validation checks.

However, when writing a module for public use, such assumptions cannot be made. We are all accustomed to assuming the worst from our fellow programmers, and we wall up our code with validation checks for every conceivable user error. Thus I had to go back and add all those extra checks to the function. Notably, I also improved some of the functional components of the code along the way, such as discarding choices with zero weights before doing the random selection. The code came out more robust and of a higher caliber after the polishing process, so it's hard to complain. But that's not the end of the story.
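The kinds of checks described above might look something like the following sketch. This is a hedged illustration of the idea, not the module's real validation code, and the `{ weight, result }` shape and error messages are assumptions:

```javascript
// Illustrative validation for a list of weighted choices
// (the real module's checks may differ).
function validateChoices(choices) {
    if (!Array.isArray(choices) || choices.length === 0) {
        throw new Error("choices must be a non-empty array");
    }
    for (const choice of choices) {
        if (typeof choice !== "object" || choice === null) {
            throw new Error("each choice must be an object");
        }
        if (typeof choice.weight !== "number" || choice.weight < 0 || !isFinite(choice.weight)) {
            throw new Error("each choice needs a non-negative, finite numeric weight");
        }
    }
    // Discard zero-weight choices before selection, as described above
    const usable = choices.filter(choice => choice.weight > 0);
    if (usable.length === 0) {
        throw new Error("at least one choice must have a positive weight");
    }
    return usable;
}
```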

Efficiency vs Validation

While there is much to be gained by adding validation checks left, right, and centre, we do lose something along the way: efficiency. Running all these extra checks on every call to the function means spending potentially valuable processing time repeating them over and over. As I mentioned earlier, I trust myself not to provide invalid inputs, so for my own usage this extra validation is essentially wasted time. I'm sure many developers feel the same way about their code.

I wanted to find a way to keep the efficiency of code without validation checks, without losing the robustness and ease of debugging that validation provides. I settled on breaking the API into two functions: one that performs validation and then random selection, and one that does just the random selection. I exposed both in the API, so the user can choose validation or forgo it for performance. This way I could retain the performance level of my old code (as could anyone else), while it remained possible to add the validation checks back in whenever things were not working as expected.
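The shape of that two-function API could be sketched as follows. The function names here are illustrative, not necessarily the module's real exports, and the choice shape is again an assumed `{ weight, result }`:

```javascript
// Fast path: selection only, no validation (caller guarantees valid input)
function selectUnchecked(choices) {
    let remaining = Math.random() * choices.reduce((sum, c) => sum + c.weight, 0);
    for (const choice of choices) {
        remaining -= choice.weight;
        if (remaining < 0) return choice.result;
    }
    return choices[choices.length - 1].result;
}

// Safe path: validate, strip zero-weight choices, then delegate to the fast path
function selectChecked(choices) {
    if (!Array.isArray(choices) || choices.some(c => typeof c.weight !== "number" || c.weight < 0)) {
        throw new Error("choices must be an array of { weight, result } with non-negative weights");
    }
    const usable = choices.filter(c => c.weight > 0);
    if (usable.length === 0) {
        throw new Error("at least one choice must have a positive weight");
    }
    return selectUnchecked(usable);
}
```

Exposing both lets trusted callers pay zero validation cost while everyone else gets clear errors instead of silent misbehaviour.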

Unit Tests

What responsible NPM module doesn’t include a suite of unit tests? Of course, my function didn’t include any unit tests as originally written, as I just tested it a few times and assumed all was well (in my defense, all was well for my purposes). So I got to work implementing a few unit tests with Mocha, Chai, and Sinon. While I had worked with unit tests using these frameworks in my recent internship, I had never had a chance to set them up from scratch. A few takeaways from that experience:

1. The way you import Chai’s expect and should is actually different. This is easy to mix up, so it’s important to keep in mind. The correct way to import each is as follows:

const Chai = require("chai");
const should = Chai.should(); // Notice we call the function
const expect = Chai.expect; // Notice we don't call this one

2. If you include a done argument in the function for an it() block, you must call it, regardless of whether your code does anything asynchronous or not. This optional argument is intended for asynchronous tests, where you call done() upon completion of all the asynchronous tasks. If you do not call it, the test will simply time out. For example:

// This test will time out, because done is never called
it("should do a thing", done => {
    expect(true).to.be.true;
});

// This will work
it("should do a thing", done => {
    expect(true).to.be.true;
    done();
});

// Or, preferably, just don't include done at all!
it("should do a thing", () => {
    expect(true).to.be.true;
});

Publishing to NPM

The publishing process wasn’t too hard. NPM publishing is done entirely through the command line, which is odd for a service with a web interface, but I learned the ropes pretty quickly. I followed this guide for the basics, and soon enough my package was up and ready for downloads! Of course, the very first download was my own: I replaced the code I originally wrote in my Discord bot’s sources with the new module. It has been serving me well since!

Automated Deployment with TravisCI

Updating the NPM package is naturally a two-step process: first push a new commit to GitHub, then publish the update to NPM. I wanted to make deploying new updates easier, so I used TravisCI for continuous integration and to automate deployments. Whenever I push a new commit to GitHub, TravisCI runs the unit tests and, if they pass, automatically publishes the update to NPM (build logs)! An easier life is always a good thing, and automating things is always fun too.

For details on setting up TravisCI for your repo, see this guide and this guide. The setup process for auto-deploy wasn’t as straightforward as it seemed like it would be. I started by signing into TravisCI with my GitHub account, and setting up a TravisCI service under Settings > Integrations & services on the GitHub repo. Note that I left the User/Token/Domain fields blank when doing so. This got the first part of the flow working: pushing new commits would trigger TravisCI builds and tests automatically.

The automated deploy part was a bit trickier. The guides I referenced earlier tell you to encrypt and include your NPM API key in the .travis.yml file directly, like so:

deploy:
  provider: npm
  api_key:
    secure: "W4Dm3g6..."
  email: "rifdhan.nazeer@gmail.com"
  on:
    tags: true

However, this approach didn’t work for me for some reason. During the deploy step on TravisCI, NPM complained that I was not logged in. After spending a lot of time searching for solutions, I eventually tried creating an environment variable through TravisCI’s web UI, and referencing the environment variable in my .travis.yml file as follows:

deploy:
  provider: npm
  api_key: $NPM_API_KEY
  email: "rifdhan.nazeer@gmail.com"
  on:
    tags: true

This method ended up working just fine, and TravisCI now automatically deploys to NPM after a successful build and test run. Note that the environment variable defined in TravisCI’s web UI holds the raw API key, without encryption. The option to print it in the logs is disabled, for obvious security reasons.

Now, to publish a new version of my package, all I have to do is tag the relevant commit with a SemVer version (e.g. git tag v1.0.4) and push it to GitHub with git push --tags!

Conclusion

Making an NPM module isn’t as daunting as it may seem. I’m glad I found the motivation to publish this one, and I look forward to publishing more in the time to come. I hope you found some insight into the package-creation process and its nice-to-have extras, and perhaps even the inspiration to publish something of your own!
