Unit testing on Lambda

Mathieu Tamer
Published in precogs-tech · Dec 7, 2017

A while ago, I attended an AWS user group meetup where one of the speakers explained the challenges he faced when migrating to a serverless architecture. He felt that since his team had migrated to AWS Lambda, developers had become reluctant to write unit tests.

As a conscientious developer (well, rather a developer obsessed with unit testing), that was a hard pill to swallow. The reasons he gave were real issues (e.g. functions with so little code that testing seems pointless, difficulties mocking AWS services or simulating Lambda), but none of them should lead us to stop writing unit tests. We just have to adapt our unit tests to this new approach to programming. At least, that’s what I’ve tried to do. And this is what this article is about.

The importance of testing

I won’t say too much about how tested code is easier to maintain and how quickly a bug can be fixed when you catch it at an early stage. Microservices, even if they are only a few lines of code, also need to be tested. Microservices are like libraries, and I personally never use an untested lib. So I apply the same principle in my serverless architecture, and I write tests for all my AWS Lambda functions (no matter how small they may be).

Always write the test that reproduces a bug before fixing it.

One of the key principles I try to apply is test-driven development. I believe that TDD is not just a buzzword: writing tests before coding greatly simplifies the coding stage. When I start coding, I already know what the final code must look like. And clearly, writing a test before coding is as easy as writing it after.

I also try to write the test corresponding to a bug before fixing it. That way, I ensure this bug can never happen again. Plus the test is simple to create: since I have a bug, I already have an example that triggers it.
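As an illustration, here is a minimal sketch of such a regression test, assuming a hypothetical parseOrder() helper that used to crash on an empty payload: the test is written first (and fails), then the fix makes it pass.

// Regression test sketch (the module and error message are hypothetical)
const { expect } = require('chai');
const { parseOrder } = require('../lib/parse-order'); // hypothetical helper

describe('parseOrder', () => {
  it('rejects an empty payload with an explicit error (regression test)', () => {
    // Before the fix, this call crashed with a TypeError instead of a clear error
    expect(() => parseOrder({})).to.throw('Missing required field: orderId');
  });
});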

Tests at Precogs

Before I go on to the issues we faced and how we tried to solve them, I think it’s important to provide some context: our microservices are written in Node.js, we deploy them on AWS Lambda, and our database is PostgreSQL. We use mocha to run our tests along with chai/expect, and we stub with sinon and proxyquire.

Our test stack at Precogs: Mocha + Chai + Sinon.JS
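For reference, a minimal sketch of how such a stack might be set up (not our actual configuration, and versions should be pinned as needed):

# Install the test stack as dev dependencies
npm install --save-dev mocha chai sinon proxyquire

# With "test": "mocha" in the package.json scripts, the suite runs with:
npm test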

Stub

One of the key points about unit testing is that it’s not integration testing. It may be obvious, but I have already caught myself saying “no need to stub that, it will also be an integration test: kill two birds with one stone”. But all tests must be quick to rewrite, and mixing unit tests with integration tests makes it unclear why a test fails. Especially when I develop microservices and split my code into minimal functions, I want to ensure that every single function works “regardless” of its dependencies. That’s why I stub all methods. In addition, all the stubbed methods are themselves tested. And that’s also true for external libs: I only choose tested libs.

To avoid tests contaminating each other, I ensure that all my stubs are restored in “afterEach”. I don’t want a stub from one test to interfere with another one! Another solution with sinon is to use a sandbox for each test.
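A minimal sketch of both approaches, assuming a hypothetical lib/db module with a query method (the restore calls are the important part):

const sinon = require('sinon');
const db = require('../lib/db'); // hypothetical module being stubbed

describe('with individual stubs', () => {
  beforeEach(() => {
    sinon.stub(db, 'query');
  });

  afterEach(() => {
    // Restore the original method so other tests are not affected
    db.query.restore();
  });
});

describe('with a sandbox', () => {
  let sandbox;

  beforeEach(() => {
    sandbox = sinon.createSandbox(); // sinon.sandbox.create() in older versions
    sandbox.stub(db, 'query');
  });

  afterEach(() => {
    // Restores every stub created through the sandbox at once
    sandbox.restore();
  });
});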

Example of simple lambda handler | https://gist.github.com/mathieutamer/a65dcc06ac2356ed4360cf7e66b34186
Example of simple test with sinon | https://gist.github.com/mathieutamer/5b28a97dfe92cce27128118b7afe0b4a

Stubbing with sinon works quite well and easily for most libs, but for some specific libs (like aws-sdk or sequelize), it can be a lot more complicated. For example, when we tried to stub aws-sdk with sinon we got errors like TypeError: Attempted to wrap undefined property <aws_property> as function. This is because aws-sdk does not define its methods on the prototype. We tried different approaches (like the aws-sdk-mock lib, or stubbing the request method, which is always called in the end) but we concluded that the simplest way to stub is to use proxyquire. We also use proxyquire to stub “single function” modules. And for database tests, we also use proxyquire, in order to stub the connection and query functions easily.
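A minimal sketch of the proxyquire approach, assuming a hypothetical handler module at ../index that calls new AWS.S3() and then getObject (the method, payload and path are made up for the example):

const proxyquire = require('proxyquire');
const sinon = require('sinon');

// Stub returned for every `new AWS.S3()` inside the module under test
const getObjectStub = sinon.stub().callsArgWith(1, null, { Body: 'some content' });
const awsStub = {
  S3: function S3() {
    this.getObject = getObjectStub;
  },
};

// The module under test receives our stubbed aws-sdk instead of the real one
const index = proxyquire('../index', { 'aws-sdk': awsStub });

The test can then invoke index.handler as usual and assert on getObjectStub, without aws-sdk ever being loaded for real.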

The use of done

Like I said, we use mocha to run our tests, and we use the done callback. This allows us to test async methods and to check all stub calls in one test. But one of the issues we faced was that we often got the message Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. This happens because if a test fails while using done and runs asynchronously, the error is not caught and this generic error is thrown instead. The solution we found is to use the done callback to pass detailed messages in case of error (e.g. done(new Error('Explanation on the error'))). Combined with a try/catch in the callback, it allows us to catch the error and send it to done. We also call done with an error in case of unexpected success: done(new Error('Should not have succeeded')). Those two tips give us a clear idea of what went wrong in a test, as shown in the following example:

// Test of a lambda handler
index.handler(payload, context, (err, data) => {
  try {
    // Some tests here...
    done();
  } catch (e) {
    done(e);
  }
});

The try/catch solution was not a unanimous choice at Precogs, because it makes the code more complex and slows down the tests. But it is the “best” solution I found to print the actual error for every failing test instead of just a generic one.

An easier way to get the same behavior with promises is to catch the error after the then/catch chain and send it to done, like in this example:

somePromise(data)
  .then(() => {
    done(new Error('Should not have succeeded'));
  })
  .catch((err) => {
    // Some tests here...
    done();
  })
  .catch(done); // It will only be executed in case of a failing test

A test can also fail

Yes, a test can also fail! So we try to get the most accurate error message possible in case of failure. Remember, the error is caught and passed to the done callback by the try/catch. We use the chai/expect assertion library (along with sinon for stubs), and we try to assert very precise data in each assertion. For example, we prefer to use getCall(n) (or equivalent) along with args[n] instead of alwaysCalledWith. We do that because:

expect(methodSpied.firstCall.args[0]).to.equal('foo');

is always easier to debug than:

expect(methodSpied.alwaysCalledWithExactly('foo')).to.be.true;

If the spied method is called with ‘bar’, the first one will return AssertionError: expected ‘foo’ to equal ‘bar’ and the second one AssertionError: expected false to be true. The first message, along with the context provided by mocha, may help solve the bug directly, without having to rerun the test with more logs or different input data. Moreover, in case of multiple arguments, the “one line” version will look like:

expect(methodSpied.alwaysCalledWithExactly('foo', 'bar', objectTested)).to.be.true;

But if we change one argument (e.g. 'foo') in a commit, the whole line will be shown as modified in git. Whereas in:

expect(methodSpied.firstCall.args[0]).to.equal('foo');
expect(methodSpied.firstCall.args[1]).to.equal('bar');
expect(methodSpied.firstCall.args[2]).to.deep.equal(objectTested);

only the first line will be shown as modified.

Also, a custom error message can be given as the second argument to expect, which can be very useful for simple assertions. For example:

expect(methodSpied.called, 'methodName should not have been called').to.be.false;

provides much more accurate context, even for a basic test.

Test accurately

When we write tests, we prefer to spend a little more time in order to be more specific (e.g. throwing ‘error function foo’ instead of just ‘error’ when testing how an error is handled). This ensures that in a test involving multiple errors, it’s the right error that is caught.
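For instance, a minimal sketch, assuming a hypothetical handler whose getUser dependency is stubbed to fail (getUserStub, payload, context and done come from the surrounding test); the distinctive message guarantees we are asserting on that error and not another one:

// getUserStub is a hypothetical stubbed dependency of index.handler
getUserStub.callsArgWith(1, new Error('error function getUser'));

index.handler(payload, context, (err) => {
  try {
    // A generic 'error' message could come from anywhere; this one cannot
    expect(err.message).to.equal('error function getUser');
    done();
  } catch (e) {
    done(e);
  }
});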

Since we stub a lot, we ensure that (at least in the success case) all stubs are called with the correct args. For us, exhaustiveness is key! If we don’t do that, stubs can lead to tests that pass for the wrong reasons. A stub may not be called at all in the success case, and if the test doesn’t catch this, the function will fail unexpectedly in production. It’s the same with stub arguments: we check that every argument of every stub is correct. Moreover, when we stub our database client, we make sure the query under test is valid by running it against our test database.
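In practice this means checking, for every stub, both its call count and each of its arguments, as in this sketch (queryStub, s3Stub and orderId are hypothetical names defined elsewhere in the test):

// Every stub is verified: number of calls and every argument
expect(queryStub.callCount).to.equal(1);
expect(queryStub.firstCall.args[0]).to.equal('SELECT * FROM orders WHERE id = $1');
expect(queryStub.firstCall.args[1]).to.deep.equal([orderId]);

// Stubs that must not be reached in this flow are also asserted
expect(s3Stub.callCount, 's3 should not be called in this flow').to.equal(0);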

I am convinced that unit testing is the best way to save time and gain efficiency when coding, especially when rewriting existing code. And even if microservices, and Lambda in particular, have fundamentally transformed our approach to code, there is no need to abandon unit tests: we just have to reinvent the way we write them!

Your questions or remarks mean a lot to us, so feel free to comment :)
