Automatic testing with feathers-plus/cli

Unit testers be like: “Looks like it’s working” (*)

Feathers-plus/cli generates unit, integration and client/server tests for hooks, services and authentication. These give you increased confidence that your app works as intended.

This article is part 3 in a set of publications dealing with testing in Feathers.

A minimum viable app to test

We need an app that we can generate multiple tests for. Medium articles have already been published describing how to generate services with feathers-plus/cli (cli+), so we will skim through that process here.

Generating our test app

Also generate a roles service, just as we did for teams.

Now let’s generate a hook. We’ll name it skipRemainingHooks because, for illustrative purposes, it’ll do exactly what the feathers-hooks-common hook of the same name does.

feathers-plus generate hook

Testing hooks

cli+ can generate both unit and integration tests for hooks. These tests use the popular Mocha test framework.

A template for a unit test is automatically created whenever you generate a hook. By running generate test, you can regenerate this unit test or generate the integration test.

feathers-plus generate test

Once you have the template, just customize it with your tests.

Unit testing hooks

Unit tests for hooks are great because, in addition to simple conditions, they can test

  • edge case conditions,
  • conditions that are more complicated and difficult to trigger,
  • conditions which require time to trigger, or that are date based.

Almost all of the 759 tests in feathers-hooks-common are unit tests. Many other Feathers repos also use unit tests for their hooks.

You can look at this code comparison between the generated template for a unit test and the completed test for skipRemainingHooks.

Let’s look at some of the code in the completed test. You should already know that Feathers passes a context object to hooks. Here we build some of those context objects so we can use them later in tests.

contextBefore = {
  type: 'before',
  params: { provider: 'socketio' },
  data: {
    first: 'John', last: 'Doe'
  }
};

contextAfter = {
  type: 'after',
  params: { provider: 'socketio' },
  result: {
    first: 'Jane', last: 'Doe'
  }
};

contextBefore is a short version of what a before hook would receive for, say, a create call. contextAfter is what an after hook would receive.

We can use these context objects to call the hooks directly, testing many conditions quickly.

const SKIP = require('@feathersjs/feathers').SKIP;

describe('Predicate is not a function', () => {
  it('False returns context', () => {
    const result = skipRemainingHooks(false)(contextBefore);
    assert.equal(result, contextBefore);
  });

  it('True returns SKIP token', () => {
    const result = skipRemainingHooks(true)(contextBefore);
    assert.equal(result, SKIP);
  });
});

describe('Predicate is a function', () => {
  it('False returns context', () => {
    const result = skipRemainingHooks(() => false)(contextBefore);
    assert.equal(result, contextBefore);
  });

  it('True returns SKIP token', () => {
    const result = skipRemainingHooks(() => true)(contextBefore);
    assert.equal(result, SKIP);
  });
});

describe('Default predicate is "context => !!context.result"', () => {
  it('No context.result', () => {
    const result = skipRemainingHooks()(contextBefore);
    assert.equal(result, contextBefore);
  });

  it('Has context.result', () => {
    const result = skipRemainingHooks()(contextAfter);
    assert.equal(result, SKIP);
  });
});
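
For reference, here is a minimal sketch of what such a hook can look like. This is an illustration only, not the feathers-hooks-common implementation, and a local Symbol stands in for the real SKIP token so the snippet runs standalone:

```javascript
// Stand-in for require('@feathersjs/feathers').SKIP so this runs standalone.
const SKIP = Symbol('skipRemainingHooks');

// The predicate may be a boolean or a function of the hook context;
// the default predicate is context => !!context.result.
function skipRemainingHooks(predicate = context => !!context.result) {
  return context => {
    const skip = typeof predicate === 'function' ? predicate(context) : !!predicate;
    return skip ? SKIP : context;
  };
}

const contextBefore = { type: 'before', data: { first: 'John', last: 'Doe' } };
const contextAfter = { type: 'after', result: { first: 'Jane', last: 'Doe' } };

console.log(skipRemainingHooks(false)(contextBefore) === contextBefore); // true
console.log(skipRemainingHooks()(contextAfter) === SKIP); // true
```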

Time for a story

I chose skipRemainingHooks as our example so I could tell you the following story. I wrote that hook for feathers-hooks-common when SKIP arrived as a PR. The PR said it skipped the remaining hooks, the Feathers docs said it skipped the remaining hooks, so … it must skip the remaining hooks. Right?

SKIP was key to the design of the new softDelete2 hook. However weird problems appeared during testing.

It turns out SKIP skips the remaining hooks in the before, after, or error section it’s used in. If you use it in the before section, none of the remaining before hooks will run, but the after hooks will still run. Basically, the phrase “skips the remaining hooks” was ambiguous.
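
That scoping can be modeled with a toy hook runner. This is a deliberately simplified sketch, not how Feathers actually dispatches hooks, and the hook names are illustrative:

```javascript
// Toy model: SKIP aborts the remaining hooks of the current section only;
// the next section still runs. A local Symbol stands in for the real token.
const SKIP = Symbol('skip');

function runSection(hooks, context) {
  for (const hook of hooks) {
    if (hook(context) === SKIP) break; // skip the rest of this section
  }
  return context;
}

const ran = [];
const beforeHooks = [
  () => { ran.push('before1'); },
  () => SKIP,
  () => { ran.push('before2'); } // never reached
];
const afterHooks = [() => { ran.push('after1'); }];

let context = {};
context = runSection(beforeHooks, context);
context = runSection(afterHooks, context); // the after section still runs

console.log(ran); // ['before1', 'after1']
```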

This was news to all the Feathers maintainers, and the docs are now updated.

This story illustrates the need for integration testing when your hook interacts with Feathers’ infrastructure in a non-trivial way. The same concept applies to testing services and authentication: you should use Feathers infrastructure in your tests rather than try to fake or stub it.

My suggestion is to largely limit your unit tests to code which does not relate to Feathers, and to testing hooks (because unit tests are so convenient there).

You can have increasing confidence in your tests as you move from unit tests, to integration tests, to client/server tests. The tests will get harder to write and slower to run as you move up this hierarchy. However cli+ generates test templates for all of these, significantly reducing the work involved.

In summary, as one notable article says, Write tests. Not too many. Mostly integration.

Integration testing of hooks

Let’s generate a template for the integration test. This will check SKIP works as described above.

feathers-plus generate test, option hook integration

You can look at this code comparison between the generated template for an integration test and the completed test for skipRemainingHooks.

Let’s look at some of the code in the completed test. Our test does not need fake data, so we removed the code reading the fake data generated by generate fakes.

const { join } = require('path');
const { readJsonFileSync } = require('@feathers-plus/test-utils');
// ...

// Get generated fake data
// eslint-disable-next-line no-unused-vars
const fakeData = readJsonFileSync(join(__dirname, '../../seeds/fake-data.json')) || {};

We added a hook which logs the calls made to it, along with two tests.

let app;
let service;
let hooksRun;

function setHookRun(name) {
  return () => { hooksRun.push(name); };
}

beforeEach(() => {
  app = feathers();
  hooksRun = [];

  app.use('/test-service', {
    async create(data) { return data; }
  });

  app.service('test-service').hooks({
    before: {
      create: [
        setHookRun('before1'),
        skipRemainingHooks(() => true), // returns SKIP
        setHookRun('before2')
      ]
    },
    after: {
      create: [
        setHookRun('after1')
      ]
    }
  });

  service = app.service('test-service');
});

// ...
it('SKIP before skips following before hooks', async () => {
  await service.create({ foo: 'bar' });
  assert(!hooksRun.includes('before2'), 'following hooks are run');
});

it('SKIP before does not skip after hooks', async () => {
  await service.create({ foo: 'bar' });
  assert.deepEqual(hooksRun, ['before1', 'after1']);
});

Voilà! We have confirmed how SKIP works.

“use Feathers infrastructure in your tests rather than try to fake or stub it”

Testing services on the server

Let’s test that the users service encrypts the password and removes it for external requests.

feathers-plus generate test, option service using server

You can see how the completed test differs from the test template.

Since this test mutates the database, we left in the code which checks if NODE_ENV allows database mutation. (Read the seeding data article for more information on protecting production data from unintended changes.)

// Determine if environment allows test to mutate existing DB data.
const env = (config.tests || {}).environmentsAllowingSeedData || [];
if (!env.includes(process.env.NODE_ENV)) {
  // eslint-disable-next-line no-console
  console.log('SKIPPED - Test users/users.service.server.test.js');
  return;
}


Testing services using client/server

We can run exactly the same tests from a client, rather than on the server. This tests the entire Feathers transport infrastructure.

feathers-plus generate test, option service using client/server

Once again you can see how the completed test differs from the test template, and you can see how the client is configured and authenticated.

async function makeClient(host, port, email1, password1) {
  const client = feathersClient();
  const socket = io(`http://${host}:${port}`, {
    transports: ['websocket'], forceNew: true, reconnection: false, extraHeaders: {}
  });

  client.configure(feathersClient.socketio(socket));
  client.configure(feathersClient.authentication({
    storage: localStorage
  }));

  try {
    await client.authenticate({
      strategy: 'local',
      email: email1,
      password: password1,
    });
  } catch (err) {
    throw new Error(`Unable to authenticate: ${err.message}`);
  }

  return client;
}

A little practical matter

It takes a bit of time to spin up a client/server test, mainly because bcryptjs requires some time to encrypt the login password. This is by design as that delay helps impede hacking. Nevertheless, this delay can add up when running multiple client/server tests.

You can speed up your development test sequence by passing the --noclient argument to mocha.

NODE_ENV=test npm run mocha -- --noclient

This will cause the client/server-based tests to be skipped since they include

// Determine if environment allows test to mutate existing DB data.
env = (config.tests || {}).environmentsAllowingSeedData || [];
if (!env.includes(process.env.NODE_ENV) ||
  process.argv.includes('--noclient')) {
  // eslint-disable-next-line no-console
  console.log('SKIPPED - Test users/users.service.client.test.js');
  return;
}


You can run the client/server tests whenever you want, and your CI will run them by default.

Testing basic authentication

The basic authentication test ensures a client can authenticate using either an email/password combo or a JWT.

feathers-plus generate test, option basic authentication

The generated code calls a utility routine. There is little to customize. cli+ persists the specs for your app in feathers-gen-specs.json, and authenticationBase uses that to figure out what to do.

partial log from authentication.base.test.js

“I haven’t yet written an app where I didn’t have to change the default generated authentication.” — Marshall Thompson, core member, FeathersJS

Testing authentication for every service

cli+ can generate a test to confirm the app’s authentication is configured the way you think it is. So Marshall no longer has to write such tests manually. He tells me he’s very excited by this.

feathers-plus generate test, option auth all services

The generated code calls a utility routine. That reads feathers-gen-specs.json and figures out what to do.

It calls every method of every service. It calls them both from an authenticated and an unauthenticated client. It calls them using all the transports you configured in generate app: REST, Socket.io and Primus.

It does a lot of calls … and it checks that each call succeeds or fails as expected based on how each service was generated.

But Marshall customizes his authentication. How do we deal with that? Well, some configuration was added to config/default.json when we generated this test.

"tests": {
  "environmentsAllowingSeedData": ["test"],
  "local": {
    "password": "password"
  },
  "client": {
    "port": 3030,
    "ioOptions": {
      "transports": ["websocket"],
      "forceNew": true,
      "reconnection": false,
      "extraHeaders": {}
    },
    "primusOptions": {
      "transformer": "ws"
    },
    "restOptions": {
      "url": "http://localhost:3030"
    },
    "overriddenAuth": {}
  }
}
  • environmentsAllowingSeedData contains the NODE_ENV values during which the database may be mutated by tests. The Seed article contains more information.
  • password is the name of the password field in the user-entity service.
  • ioOptions, primusOptions, restOptions are the options to pass to the Feathers handlers for Socket.io, Primus and REST.
  • overriddenAuth identifies differences between the current authentication and the generated defaults.

Let’s say Marshall added authentication to users’ create method (no authentication is generated for that by default), he removed authentication from teams’ find method, and he disabled teams’ patch method using the disallow common hook.

He would indicate the changes he made as follows:

overriddenAuth: {
  teams: {
    find: 'noauth',   // authentication has been removed
    patch: 'disallow' // client cannot call patch
  },
  users: {
    create: 'auth' // authentication has been added
  }
}

The authentication test would take those changes into account when it’s run.
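
A toy sketch of how such a test could combine the generated defaults with overriddenAuth to decide whether each call should succeed. The default table, names and return values here are illustrative assumptions, not cli+’s actual code:

```javascript
// Hypothetical generated defaults: does each method require authentication?
const defaultAuth = {
  users: { create: false, find: true },
  teams: { find: true, patch: true }
};

// The customizations recorded in config/default.json.
const overriddenAuth = {
  teams: { find: 'noauth', patch: 'disallow' },
  users: { create: 'auth' }
};

function expectation(service, method, authenticated) {
  const override = (overriddenAuth[service] || {})[method];
  if (override === 'disallow') return 'fail'; // clients cannot call it at all
  const needsAuth = override === 'auth' ? true
    : override === 'noauth' ? false
    : defaultAuth[service][method];
  return !needsAuth || authenticated ? 'succeed' : 'fail';
}

console.log(expectation('teams', 'patch', true)); // 'fail'
console.log(expectation('teams', 'find', false)); // 'succeed'
console.log(expectation('users', 'create', false)); // 'fail'
```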

partial log from the services authentication test

The tests/ folder

You might be wondering where these generated tests are placed. They are in the tests folder organized in the same manner as your app. For example

Structure of tests/ folder.

In conclusion

This has been a long read. So, first of all, thanks for getting through it.

It’s also been a long road getting to here. We first generated fake data so it could be used in these tests. We then introduced NODE_ENV environments to protect production data from inadvertent changes by the tests. Now, finally, we are able to generate the tests themselves.

And we’re not done yet … so subscribe to Feathers-plus publications to remain informed.

As always, feel free to join Feathers Slack to join the discussion or just lurk around.

(*) Image thanks to Kent C. Dodds.