⚙ Integration tests on Node.js CLI: Part 1 — Why and how?

Andrés Zorro
May 29, 2018

This article is part of a series about writing Node.js CLI programs and, more specifically, testing them by writing E2E/integration tests rather than the usual module unit tests. If you want to skip to the final implementation code, check here. The links to the other parts are listed below:

Unit tests passed! That's enough testing!

Have you ever thought about building a CLI to ease your life? One of the first things you learn in any programming language whose main purpose is not building a web server/interface is how to write a Command Line Interface (CLI) program. Ever since we JavaScripters got Node.js to play around with server-side code, a new world of possibilities opened up, and many of us are investing time creating some of the most amazing tools the JavaScript community has ever seen (e.g. yarn).

Building quality software also requires a certain discipline around testing, which improves consistency across the codebase and lets you add new features and refactor faster. In the Node.js world, writing unit tests is cheap, fast, and in most cases very reliable. With a couple of tests in the main module (if you abstracted it correctly) you can be sure the inner logic works according to plan. But what if you want to test the actual interface? What if ensuring consistency in the inner module is not enough and you want to make sure your users see what's expected, not some random undefined? What if you want to test error messages in a consistent way?

Writing integration tests for a CLI tool is not as straightforward as it seems, since you're essentially running a process (the test runner) to run another process (the CLI). This involves creating one parent process and, of course, a child process. It also means controlling the input and capturing the output of your tool, so you can evaluate it as if you were running the tool directly in a terminal. Luckily, Node.js provides a native way to deal with all of these interactions, and even some features that may blow your mind if you don't know about them. I was amazed, for real.

Want a pizza?

For this example, I'll reference the good ol' pizza example program written with the awesome commander module. This is a good place to start thinking about testing a program written for the CLI. One thing before we start: we're assuming Node.js version ≥ 8, which includes async/await support. If you're not yet familiar with the syntax, check this awesome article about it. Let's get to the code!

This simple program lets you order a pizza and pass certain options, so that the output is a summary of what you ordered. E.g.:

$ node ./pizza.js --peppers --cheese gouda
you ordered a pizza with:
 - peppers
 - gouda cheese
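The article doesn't reproduce pizza.js itself; as a reference, here's a minimal, dependency-free approximation of it (the real example uses commander's option parsing; this version reads process.argv by hand just to make the expected output concrete):

```javascript
// pizza.js — a hand-rolled sketch of the commander "pizza" example.
// The real program uses commander; this parses process.argv directly.
const args = process.argv.slice(2);

const peppers = args.includes('--peppers');
const cheeseIndex = args.indexOf('--cheese');
// 'marble' is the default cheese in the original commander example.
const cheese = cheeseIndex !== -1 ? args[cheeseIndex + 1] : 'marble';

console.log('you ordered a pizza with:');
if (peppers) {
  console.log(' - peppers');
}
console.log(` - ${cheese} cheese`);
```

Any program with this shape (read flags, print lines) is enough to follow along.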

Writing a test for this command involves passing these arguments to the command and checking whether the output is consistent with the input. For this purpose we'll write a test in mocha/chai syntax, since it's one of the most popular combinations for JavaScript testing:

// Pizza CLI test: Take 1
const expect = require('chai').expect;
const cmd = require('./cmd');

describe('The pizza CLI', () => {
  it('should print the correct output', async () => {
    const response = await cmd.execute(
      'path/to/process',
      ['--peppers', '--cheese', 'gouda']
    );
    expect(response).to.equal(
      'you ordered a pizza with:\n - peppers\n - gouda cheese'
    );
  });
});

There are a couple of caveats about the above code. First of all, we assume that the code runs asynchronously, since we don't know when the program is going to respond. We can't assume it will execute right away, because we need to run a process that is not the mocha process. We'll get to that later.

The second thing to be aware of is that console.log appends a line-ending character to each logged message. Therefore, from the pizza program, you can assume there are at least two line breaks from the logged statements. This character may vary across operating systems, so we'll need to tackle that later on.

But I believe the biggest question is about this cmd.execute method used to run the program. It isn't referenced anywhere else, and the reason is that we're going to define it. But first, let's break down its pieces. The first one is the child process. This is an oversimplification, but it covers the main points:

const spawn = require('child_process').spawn;

function createProcess(processPath, args = [], env = null) {
  args = [processPath].concat(args);

  return spawn('node', args, {
    env: Object.assign(
      {
        NODE_ENV: 'test'
      },
      env
    )
  });
}

This method creates a child process running the supplied path and passes along the arguments and environment variables given. No big deal. Now for the application:

const concat = require('concat-stream');

function execute(processPath, args = [], opts = {}) {
  const { env = null } = opts;
  const childProcess = createProcess(processPath, args, env);
  childProcess.stdin.setEncoding('utf-8');

  const promise = new Promise((resolve, reject) => {
    childProcess.stderr.once('data', err => {
      reject(err.toString());
    });
    childProcess.on('error', reject);
    childProcess.stdout.pipe(
      concat(result => {
        resolve(result.toString());
      })
    );
  });

  return promise;
}

module.exports = { execute };

This is a promise wrapper over the child process we created, so we can use async/await syntax in our tests. The nice thing is that mocha supports this syntax out of the box, which makes test cases easy to write. spawn returns an instance of ChildProcess, which inherits from the EventEmitter class and exposes stdout and stderr as readable streams and stdin as a writable stream, so we can subscribe to their events using the Stream API. Adding a promise wrapper to a readable stream might seem dumb (streams are more powerful and expressive), but in this case it lets us write test specs in a more readable manner, instead of wiring up event listeners. We're also using the concat-stream module to gather all output from the CLI tool, in case the output arrives at different times. Here it's pretty quick because the program logs directly, but sometimes a program takes a while to output something: calling a service, doing some heavy computation, and so on.
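For comparison, the event-listener approach the promise wrapper replaces might look something like this (a sketch; executeWithEvents is a hypothetical name, and the manual chunk accumulation is what concat-stream does for us):

```javascript
// Hypothetical callback-style alternative: accumulate stdout chunks by
// hand instead of piping through concat-stream and resolving a promise.
function executeWithEvents(childProcess, done) {
  const chunks = [];

  childProcess.stdout.on('data', chunk => chunks.push(chunk));
  childProcess.stdout.on('end', () => {
    // All output has arrived; hand the full string to the callback.
    done(null, Buffer.concat(chunks).toString());
  });
  childProcess.on('error', done);
}
```

Every spec would then need its own callback plumbing, which is exactly the noise the promise wrapper keeps out of the tests.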

Let's add some things to the spec:

// Pizza CLI test: Take 2
const expect = require('chai').expect;
const cmd = require('./cmd');
const { EOL } = require('os');

describe('The pizza CLI', () => {
  it('should print the correct output', async () => {
    const response = await cmd.execute(
      'path/to/process',
      ['--peppers', '--cheese', 'gouda']
    );
    expect(response.trim().split(EOL)).to.deep.equal([
      'you ordered a pizza with:',
      ' - peppers',
      ' - gouda cheese'
    ]);
  });
});

By adding the EOL constant and splitting the response, we ensure each of the lines returned is the same one we'd see in the output if we were using the program directly. Also, thanks to the concat-stream module, we don't need to worry about when these messages show up: all of them are gathered and returned at once when the promise resolves. Nice, huh?

What if we want to test an error message? It's just as easy as catching the error and evaluating the response in the test case:

// Pizza CLI test: Error testing
const expect = require('chai').expect;
const cmd = require('./cmd');

describe('The pizza CLI', () => {
  it('should print the correct error', async () => {
    // Capture the error outside the catch block, so the test
    // still fails if the command unexpectedly succeeds.
    let error;
    try {
      await cmd.execute('path/to/process', ['--sausage']);
    } catch (err) {
      error = err;
    }
    expect(error.trim()).to.equal(
      'Invalid option --sausage'
    );
  });
});
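One general pitfall when testing rejections with try/catch: if the promise unexpectedly resolves, the catch block never runs, and depending on how the spec is written the test can pass without asserting anything. A small helper (expectRejection is my own name, not part of the article's code) makes that failure mode explicit:

```javascript
// Hypothetical helper: resolves with the rejection reason, or throws
// if the promise unexpectedly resolves, so a spec can't pass silently.
async function expectRejection(promise) {
  try {
    await promise;
  } catch (err) {
    return err;
  }
  throw new Error('Expected promise to reject, but it resolved');
}
```

In the spec above you'd write `const err = await expectRejection(cmd.execute('path/to/process', ['--sausage']));` and assert on err.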

What we have so far

We've defined a way to spawn child processes from a mocha runner and run test suites against them. We have a nice promise wrapper around the process output (and error output), and we're able to write integration tests in a friendly way that behaves exactly like the real program. We shouldn't have any surprises when an actual user tries the tool.

In part 2, we'll define a way to simulate user input and test specific scenarios that alter the program's output. If you're curious about the final implementation, check the gist here. Many thanks again for the technical input.

Thank you for reading. See you in the next entry!
