Testing Hapi Services with Lab

A primer on Lab

Fionn Kelleher
6 min read · Mar 30, 2014

UPDATE: This guide is extremely outdated!

I’m currently writing a book on hapi for O’Reilly called Getting Started with hapi.js. The book covers all of hapi’s main aspects, and promotes testing throughout using lab. I’ll be keeping it up to date with new versions of hapi, so if you’re looking to master the framework, this book could be for you. It’ll be available on O’Reilly’s website as well as Amazon in the near future.

hapi core developers have also written Developing a hapi Edge — if you’re looking to get started now, it’s worth taking a look at too!

It’s important to test software thoroughly to uncover bugs and potential edge cases. The same goes for HTTP servers, though it’s something that isn’t done as often as it should be. Testing HTTP servers and APIs can be a time-consuming task: every little nook and cranny needs to be verified to work as it should, so things don’t break when you move into production. Fortunately, testing Hapi servers is an easy task.

In this tutorial, we’ll start by writing tests for our API endpoints and subsequently implementing them using Hapi. We’ll be testing the “users” endpoints of a mock “social network” API.

A Clean Slate

Create a new directory for our API, and execute the following commands to create a package and install Hapi, Joi, and the Lab testing library. When prompted for a test command, set it to “./node_modules/lab/bin/lab -c”.

npm init
npm install hapi joi --save
npm install lab --save-dev
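After answering the prompts, the relevant part of the generated package.json should look something like this (a sketch; your other fields will differ):

```json
{
  "scripts": {
    "test": "./node_modules/lab/bin/lab -c"
  }
}
```

With this in place, npm test will run Lab with coverage enabled.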

The only implementation code we’ll write for now is a server with no endpoints. Create index.js with the following code:

var Hapi = require("hapi");

var server = new Hapi.Server(8080);

if (!module.parent) {
  server.start(function() {
    console.log("Server started", server.info.uri);
  });
}

module.exports = server;

The if (!module.parent) {…} conditional makes sure that if the script is being required as a module by another script, we don’t start the server. This is done to prevent the server from starting when we’re testing it; with Hapi, we don’t need to have the server listening to test it. We’ll explain how this works later on.

Time To Experiment

Create a directory named test, for — you guessed it — our tests. Now we can start to scaffold our API and write tests for it.

Testing Our Main Endpoint

Let’s create a test suite for the “users” aspect of our network. In the test directory, create users.js containing the following:

var Lab = require("lab"),
    server = require("../");

Lab.experiment("Users", function() {
  // tests
});

We require the Lab testing library as well as our server, which won’t be started automatically for us. The Lab.experiment(…); call sets up our “experiment”. This is equivalent to the describe pattern in other unit test libraries such as mocha; if you prefer that naming, assign Lab.experiment to a variable named describe.
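The aliasing is just variable assignment. Here’s a sketch of the idea using a tiny stand-in for Lab’s API (in a real suite you would require("lab") instead):

```javascript
// Minimal stand-in for Lab's experiment/test functions, purely to
// illustrate the aliasing; a real suite would use require("lab").
var Lab = {
  experiment: function(name, fn) { console.log("experiment:", name); fn(); },
  test: function(name, fn) { console.log("  test:", name); fn(function() {}); }
};

// Alias Lab's names to the mocha-style describe/it:
var describe = Lab.experiment,
    it = Lab.test;

describe("Users", function() {
  it("reads like a mocha suite", function(done) {
    done();
  });
});
```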

Inside the closure passed to our “Users” experiment, we can place our tests, or other sub-experiments.

Within the “Users” experiment, let’s add our first test to exercise the “/users” route.

Lab.test("main endpoint lists usernames on the network", function(done) {
  var options = {
    method: "GET",
    url: "/users"
  };

  server.inject(options, function(response) {
    var result = response.result;

    Lab.expect(response.statusCode).to.equal(200);
    Lab.expect(result).to.be.instanceof(Array);
    Lab.expect(result).to.have.length(5);

    done();
  });
});

Execute npm test and we’ll see straight away that the tests have failed.

Lab tells us which test failed, along with a diff between the actual and expected values of the assertion. Lab will bail out and halt any remaining tests at this point, allowing us to fix our server so the test passes.

Notice also how we never called server.start() — Hapi includes a request injection framework to simulate requests on the server without it actually having to be bound to an interface. This is not only useful because we no longer have to worry about what port the server binds to, but also because when a request is injected, we don’t get any overhead from creating a socket connection.
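Conceptually, injection just hands a fabricated request object straight to the matching route handler. Here’s a toy sketch of the idea (not hapi’s actual implementation — the routes table and handler shape are made up for illustration):

```javascript
// Toy sketch of request injection: dispatch a fabricated request
// straight to a handler, with no socket or port involved.
var routes = {
  "GET /users": function(request, reply) {
    reply(["bob", "alice"]);
  }
};

function inject(options, callback) {
  var handler = routes[options.method + " " + options.url];
  handler({ method: options.method, url: options.url }, function(result) {
    callback({ statusCode: 200, result: result });
  });
}

inject({ method: "GET", url: "/users" }, function(response) {
  console.log(response.statusCode, response.result.length);
});
```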

Switch back to index.js and add a route as follows, and run the tests again:

server.route({
  path: "/users",
  method: "GET",
  handler: function(request, reply) {
    reply({});
  }
});

Our first assertion passes this time; however, since our route returned an object, our second assertion failed, expecting the reply to be an Array.

Let’s change our reply to be an Array rather than an object, since that’s what our test expects.

reply([]);

Running the tests again, we’re faced with one more problem:

We’re expecting to have five users in the Array, so let’s add a container to store our users and fill it up with some details.

Grab the JSON object over here and save it as database.json. Require it in index.js like so:

var database = require("./database.json");

Our users route will return a list of usernames, so modify our handler to the following:

reply(Object.keys(database));
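Object.keys gives us exactly the list the test expects: the usernames, which are the top-level keys of the database object. A small sketch with a hypothetical two-user database (the real database.json is linked above and has five):

```javascript
// Hypothetical shape of database.json: usernames as top-level keys.
var database = {
  "bob":   { full_name: "Bob Example",   age: 30 },
  "alice": { full_name: "Alice Example", age: 25 }
};

// Object.keys extracts just the usernames — what the /users route replies with.
console.log(Object.keys(database));
```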

Re-run the tests, and lo and behold…

Lab.expect() is a reference to the chai assertion module; you can find a list of possible assertions in chai’s documentation.

Now, we’re sure that requesting our main endpoint gives us the correct results. But this API is pretty useless as it stands, so let’s flesh it out some more. Grab the rest of our routes over here and add them to index.js. All of these routes are working perfectly; there aren’t any nasty surprises (I tested them myself!).

Testing Valid User Creation

We now have a PUT route for /users/{username} that we can use to add a new user. It accepts a payload with the following values:

  • full_name (string)
  • age (integer)
  • image (string)
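Those payload constraints are what the route’s Joi schema would express. Here’s a plain-JavaScript sketch of the same validation logic, detached from hapi (the function name and sample values are made up for illustration):

```javascript
// Plain-JS sketch of the payload rules for PUT /users/{username}:
// full_name and image must be strings, age must be an integer.
function validatePayload(payload) {
  return typeof payload.full_name === "string" &&
         Number.isInteger(payload.age) &&
         typeof payload.image === "string";
}

console.log(validatePayload({ full_name: "Test User", age: 19, image: "dhown783hhdwinx.png" }));
console.log(validatePayload({ full_name: "Test User", age: "19", image: "x.png" }));
```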

Let’s add a new test case for this route.

Lab.test("creating valid user", function(done) {
  var options = {
    method: "PUT",
    url: "/users/testuser",
    payload: {
      full_name: "Test User",
      age: 19,
      image: "dhown783hhdwinx.png"
    }
  };

  server.inject(options, function(response) {
    var result = response.result,
        payload = options.payload;

    Lab.expect(response.statusCode).to.equal(200);
    Lab.expect(result.full_name).to.equal(payload.full_name);
    Lab.expect(result.age).to.equal(payload.age);
    Lab.expect(result.image).to.equal(payload.image);
    Lab.expect(result.count).to.equal(0);

    done();
  });
});

Running the tests, we’ll see that they executed successfully.

Not everything is tested so far, so I’m going to leave it up to you to test the rest of our endpoints. Here are some things you’ll want to test:

  • User is actually added after PUT /users/{username}.
  • PUT /users/{username} when user already exists returns error object.
  • GET /users/{username} on existing user returns user object.
  • GET /users/{username} on non-existing user returns error object.
  • DELETE /users/{username} returns an object containing a key, “success”, set to true.
  • User is non-existent after DELETE /users/{username}.
  • DELETE /users/{username} on non-existing user returns error object.

Testing Plugins

If your Hapi server utilises plugins, it may be useful to create test cases to make sure they can be required successfully, to check that options are honoured, or, where a plugin exposes properties or functions, to verify its behaviour.

I recommend testing plugins individually rather than testing the server with the plugins loaded. A typical test case for a plugin could look like the following:

var Lab = require("lab"),
    Hapi = require("hapi");

Lab.experiment("Lout plugin", function() {
  var server = new Hapi.Server();

  Lab.test("Plugin successfully loads", function(done) {
    server.pack.require("lout", function(err) {
      Lab.expect(err).to.equal(null);

      done();
    });
  });

  Lab.test("Plugin registers routes", function(done) {
    var table = server.table();

    Lab.expect(table).to.have.length(2);
    Lab.expect(table[0].path).to.equal("/docs");
    Lab.expect(table[1].path).to.equal("/docs/css/{path*}");

    done();
  });
});

Code Coverage

Our test script outlined in package.json passes the “-c” flag to Lab, enabling code coverage checks. Code coverage gives you an insight into how broadly your code is actually tested: Lab uses the Esprima ECMAScript parser to identify every possible execution path in your code, then tracks execution to check that your tests have covered each of them.

Code coverage is something that people tend to leave out, and isn’t included in many test libraries by default. I highly recommend using Lab’s code coverage functionality, as keeping a high percentage of coverage will result in better tested code and fewer bugs.


Fionn Kelleher

17. Programmer, student, programming student. Author of “Getting Started with Hapi.js” (O’Reilly).