Blazing fast tests with Firebase: 15x quicker

Shane O'Sullivan
Published in Promise Eng
Apr 29, 2019

At Promise, we take the reliability and stability of our systems very seriously. If we ship a bad bug, one or more people may end up in prison, given that our tools are built to help people stay out of the justice system.

One way we ensure the quality of our work is through testing, particularly full integration tests against our GraphQL API.

How did we get test runtime from almost 30 minutes to under two?


We build on top of Firebase and its backing store Firestore. It’s a NoSQL database built around a concept of Documents and Collections of documents. We found that we had to run tests against it sequentially, as the post-test teardown step had to wipe the database before the next test could run reliably. This resulted in our test runs becoming slower and slower, eventually taking almost 30 minutes to run.

The key realization that unlocked this huge performance boost was that Firestore is organized hierarchically, with the top-level Collections of Documents placed at the root. What if we made it so that all the root Collections could be created, read, updated and deleted from somewhere deeper in the hierarchy?

If each individual test has its own little sandbox inside the same Firestore database, the need to do an expensive tear down operation goes away, and tests can also run in parallel without overwriting data from other tests running at the same time. Just as importantly, you can run all your tests against a single Firebase project simultaneously, rather than having to build and deploy to many individual servers, keeping your CI system very simple and easy to maintain.

The resulting database ends up with a top-level __tests Collection. Inside it there is one Document per test spec (in Jest, each it invocation gets its own Document), and under those Documents sit the Collections that were formerly at the root of Firestore. For example, a document that used to live at, say, users/abc123 now lives at __tests/{test id}/users/abc123.

Setting up the Test Framework

To achieve the above result, we need to shim the Firestore object to redirect document paths. There are two methods that need to be overridden, Firestore.doc and Firestore.collection. As you can see in the code below, we store references to the original versions of those functions and call them from the shimmed versions with modified paths.
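The original post showed this as a screenshot; here is a minimal sketch of the idea (the shimFirestore name and its parameters are ours, not necessarily what the original code used):

```ts
import * as admin from "firebase-admin";

// Minimal sketch: redirect every doc()/collection() call so it lands under
// a per-test sandbox document at __tests/{testId}.
function shimFirestore(
  firestore: admin.firestore.Firestore,
  testId: string
): admin.firestore.Firestore {
  // Keep references to the original methods.
  const originalDoc = firestore.doc.bind(firestore);
  const originalCollection = firestore.collection.bind(firestore);

  // Call the originals with the path prefixed by the test's sandbox document.
  firestore.doc = (path: string) => originalDoc(`__tests/${testId}/${path}`);
  firestore.collection = (path: string) =>
    originalCollection(`__tests/${testId}/${path}`);

  return firestore;
}
```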

Each test must use a new instance of the Firestore class, so put that in the beforeEach function.

The initializeTest function uses the firebase-admin module to create a new Firebase app for each test, using a unique ID for each test.
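A rough sketch of what that might look like, building on the shimFirestore helper above (the uuid dependency and the exact app options are our assumptions):

```ts
import * as admin from "firebase-admin";
import { v4 as uuid } from "uuid";

// Sketch only: create an isolated Firebase app per test, then hand back a
// Firestore instance that is sandboxed under __tests/{testId}.
function initializeTest() {
  const testId = uuid();

  const app = admin.initializeApp(
    { credential: admin.credential.applicationDefault() },
    `test-${testId}` // each test gets its own uniquely named app
  );

  const firestore = shimFirestore(app.firestore(), testId);
  return { testId, firestore };
}

let firestore: admin.firestore.Firestore;

beforeEach(() => {
  // Every test spec starts with a fresh, sandboxed Firestore instance.
  ({ firestore } = initializeTest());
});
```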

Now each test can use that instance of the Firestore class, and all test results will be properly sandboxed!

Actually there’s a lot more to do…

So before we all pat ourselves on the back, there’s actually quite a bit more work to do. The general pattern for using Firestore on the backend is to call admin.firestore() and go read or mutate your data. However, this reverts to putting all the data at the root level.

To make tests work in parallel, we need to pass the instance of the Firestore class to every function that needs it, rather than relying on the default singleton instance. A function that used to reach for admin.firestore() internally is rewritten to take the Firestore instance as a parameter, along the lines of the sketch below.
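A hedged before-and-after illustration; the getUser function and the users collection are placeholders of ours, not code from the original post:

```ts
import * as admin from "firebase-admin";

// Before: the function reaches for the global singleton itself, so all data
// ends up at the root of the database.
async function getUser(userId: string) {
  const snapshot = await admin.firestore().collection("users").doc(userId).get();
  return snapshot.data();
}

// After: the (possibly shimmed) Firestore instance is passed in explicitly, so
// tests can hand in their sandboxed instance. In the real codebase the function
// keeps its name; we rename it here only so both versions fit in one sketch.
async function getUserSandboxed(
  firestore: admin.firestore.Firestore,
  userId: string
) {
  const snapshot = await firestore.collection("users").doc(userId).get();
  return snapshot.data();
}
```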

At Promise, given the large number of tests and fairly significant code base size, this involved multiple days of mind-numbing manual work to rewrite all functions and call sites. But it was more than worth it for the productivity gains.

Working with APIs, like GraphQL

All of the above will work just fine if you are directly testing the functions in your code base that use Firebase and are passing the Firestore class to them, but what about when you call APIs? You can’t pass your shimmed class over that boundary.

At Promise we use a GraphQL middleware layer, and our solution is to make that layer aware of our testing approach. When initializing our tests, we create an ApolloClient instance that sends the test_uid to the server as a request header.
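Something along these lines; the endpoint URL, the @apollo/client package and cross-fetch are our choices for this sketch, and only the test_uid header name comes from the original setup:

```ts
import { ApolloClient, HttpLink, InMemoryCache } from "@apollo/client";
import fetch from "cross-fetch";

// Sketch: an ApolloClient whose every request carries the test's unique ID,
// so the server can sandbox reads and writes for that test.
function createTestApolloClient(testId: string) {
  return new ApolloClient({
    link: new HttpLink({
      uri: "http://localhost:4000/graphql", // placeholder test endpoint
      headers: { test_uid: testId },
      fetch,
    }),
    cache: new InMemoryCache(),
  });
}
```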

We then extend the initializeTest function to also create the ApolloClient and provide it to each test.
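For example, extending the earlier sketch (this reuses the uuid, shimFirestore and createTestApolloClient helpers sketched above; the exact shape is ours):

```ts
// Sketch: the extended initializeTest now returns an ApolloClient alongside
// the sandboxed Firestore instance, so API tests are sandboxed too.
function initializeTest() {
  const testId = uuid();

  const app = admin.initializeApp(
    { credential: admin.credential.applicationDefault() },
    `test-${testId}`
  );

  const firestore = shimFirestore(app.firestore(), testId);
  const apolloClient = createTestApolloClient(testId);

  return { testId, firestore, apolloClient };
}
```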

For our GraphQL server, we use the Node module express-graphql to handle the HTTP requests along with the express module. express supports adding arbitrary middleware functions to handle the request and response, and we use it to decorate the Request object with additional data. By default, express-graphql uses the Request object as the GraphQL Context that is passed to each GraphQL Resolver.

The middleware function below looks for the test_uid header on the request; if it’s there, it shims the Firestore class and attaches it to the request (and therefore to the GraphQL context), otherwise it attaches the default singleton Firestore class. Finally, in the GraphQL server, the middleware is added to the Express app before the express-graphql handler.
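A sketch of both pieces, reusing the shimFirestore helper from above; the req.firestore property name, the schema import, and the named graphqlHTTP export (which varies with the express-graphql version) are our assumptions:

```ts
import express from "express";
import { graphqlHTTP } from "express-graphql"; // default export in older versions
import * as admin from "firebase-admin";
import { Firestore } from "@google-cloud/firestore";
import { schema } from "./schema"; // placeholder: your GraphQL schema

const app = express();

// Middleware: if the request carries a test_uid header, attach a Firestore
// instance sandboxed under that test; otherwise attach the default singleton.
app.use((req, _res, next) => {
  const testUid = req.headers["test_uid"];

  (req as any).firestore = testUid
    ? // A fresh instance is shimmed so the shared singleton is never mutated.
      shimFirestore(new Firestore(), String(testUid))
    : admin.firestore();

  next();
});

// express-graphql uses the Request object as the GraphQL Context by default,
// so resolvers can read context.firestore.
app.use("/graphql", graphqlHTTP({ schema, graphiql: false }));
```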

You can then use it in a GraphQL resolver like below.
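For example (the user field and users collection are placeholders; with express-graphql’s rootValue-style resolvers, the context is the second argument and is the Express request decorated by the middleware above):

```ts
// Sketch of a resolver reading through the sandboxed Firestore instance.
const rootValue = {
  user: async (args: { id: string }, context: any) => {
    const snapshot = await context.firestore
      .collection("users")
      .doc(args.id)
      .get();

    return snapshot.data();
  },
};
```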

Results

Now that it is possible to run tests in parallel, the speed of the tests depends on how many cores the machine running them has. The original time to run the tests on a 2018 15" MacBook Pro was 28 minutes. This machine has 8 virtual cores, and by default Jest will use (N - 1) cores, so with 7 workers running simultaneously, our tests take just under 4 minutes. On the cloud VM we use to run our test suite, we bumped it up to 16 cores, and the tests run in 1 minute and 45 seconds.

Getting them to run faster is possible, but given that some individual test suites take almost a minute and a half to run, work would be required to break down those suites into smaller collections of tests, then bump the machine to 32 cores. As our test suites grow, we’ll likely do so, but under two minutes is the goal for now, and we’re pretty happy with where we got!

Post Credit Scene

The code samples above are not complete, and if you attempt a similar feat, the following information will be useful.

Firstly, Jest has issues running more threads than there are cores. We tried telling Jest to use 15 concurrent threads on an 8-CPU MacBook, and it was much faster, but strange errors would crop up that made it seem as though memory was being shared between the Node processes.

Secondly, above we described how each function that needs to access Firestore data will need to be passed an instance of the Firestore class. Well, who passes that in when running normal production (non-test) code? The answer is that the Cloud Function running at the top of the call stack passes a reference to admin.firestore() to whatever function it calls, which can then pass it down the call stack chain.
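A hedged sketch of that pattern, reusing the illustrative getUserSandboxed function from earlier (the HTTP trigger and query parameter are placeholders):

```ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// In production, the Cloud Function at the top of the call stack grabs the
// singleton Firestore instance once and passes it down the call chain.
export const getUserFunction = functions.https.onRequest(async (req, res) => {
  const firestore = admin.firestore();

  const user = await getUserSandboxed(firestore, String(req.query.id));
  res.json(user);
});
```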

Thirdly, you can do so much more with this system (which we are doing internally) now that you have a document representing each individual test stored in your test database! You can:

  • Set a created_date attribute on the document, to make it easy to find in your Firestore dashboard.
  • Set a test_run_uid on it, unique to each overall test run, so you can filter the dashboard to just show test data from a single test run (see the sketch after this list).
  • Use a Jest Reporter to store the actual results of the test on the document. When it comes time to generate an overall report from your CI system, you will no longer be reduced to parsing interleaved logs potentially from many different machines to highlight errors, you will have them all nicely separated out and co-located with the test data, ready to be turned into whatever type of high-signal report you choose.
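A small sketch of the first two ideas; the tagTestDocument name and the TEST_RUN_UID environment variable are our own conventions:

```ts
import * as admin from "firebase-admin";

// Sketch: tag the per-test sandbox document itself. Note that an un-shimmed
// Firestore instance is used here so the __tests path is not redirected again.
async function tagTestDocument(testId: string) {
  await admin.firestore().doc(`__tests/${testId}`).set({
    created_date: new Date(),
    test_run_uid: process.env.TEST_RUN_UID ?? "local", // our own convention
  });
}
```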

Fourthly, there is an alternate approach one could take rather than overriding functions on the Firestore instance. That class has a private field called _referencePath that defines the root path of the database, and a possibly cleaner approach would be to replace that field instead of overriding the doc and collection functions. In our usage we found that there were cases where we needed to break out of the __tests collection, e.g. for mass deletions of data, and therefore we prefer the additional flexibility of overriding the functions. However, you may not need that, and simply replacing this one variable is certainly cleaner.

Finally, since we’ve removed the time-consuming tear-down step, you’re going to end up with hundreds of thousands of old documents in your database, forever. We recommend setting up a Cloud Function to run on a regular basis to delete old documents. We keep test results around for a day so that we can investigate any CI failures, therefore we do a recursive delete in the __tests collection of any document with a created_date value more than a day old. The cron-like functionality can be set up using Google Cloud’s PubSub architecture, or any other cloud cron system you prefer.
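A sketch of such a cleanup job, assuming the firebase-functions scheduled-functions API (which rides on Cloud PubSub) and deferring the recursive delete itself to a helper such as the FirestoreDeleter component mentioned below:

```ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

// Placeholder for a recursive-delete helper such as the FirestoreDeleter
// component mentioned below.
declare function recursiveDelete(
  ref: admin.firestore.DocumentReference
): Promise<void>;

// Sketch: every day, find test documents older than 24 hours and recursively
// delete them along with everything nested underneath.
export const cleanUpOldTestData = functions.pubsub
  .schedule("every 24 hours")
  .onRun(async () => {
    const cutoff = new Date(Date.now() - 24 * 60 * 60 * 1000);

    const oldTests = await admin
      .firestore()
      .collection("__tests")
      .where("created_date", "<", cutoff)
      .get();

    for (const doc of oldTests.docs) {
      await recursiveDelete(doc.ref);
    }
  });
```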

Deleting arbitrarily nested data in Firestore is surprisingly tricky and unsupported by the SDK, so you can use our handy open source FirestoreDeleter component if you so wish.

Thanks for reading!

You’ve made it this far, so as a reward, we’ll let you know that Promise is hiring! Go to https://joinpromise.com/jobs and take a look at the open positions, or hit us up at jobs@joinpromise.com.
