Remix’s Tech Stack: Jasmine

Jan Paul Posma
6 min read · Sep 16, 2017


This is the latest in a series about Remix’s tech stack. We’ve worked hard on setting up our ideal stack, and we’re proud of what we’ve built. Now it’s time to share it.

How We Configure Jasmine

Testing front-end code is tricky. It’s full of asynchronicity (back-and-forths with the user and the back-end), browser-specific behavior (and bugs), visuals (the correctness of which can be fuzzy), and state (management of which is still much less mature than on the back-end, where we have fantastic decades-old databases). Therefore, it’s extra important to have good automated testing tools for the front-end.

At Remix we use various tools for this, which we’ve covered in the Preventing Regressions article. This time we’ll focus on Jasmine, our unit testing tool. Over the last few years we’ve built some configuration on top of it, all for very specific reasons, which we’ll look at in detail.

Our full setup is available as open source here. Maybe some day some of this configuration can be added to Jasmine by default! 📈

Random Test Order

First of all, we want to run our tests in random order, to prevent dependencies between tests. (This is one of the reasons we use Jasmine and not Mocha, which doesn’t support random test order.)

// Prevent dependencies between tests by randomizing tests.
jasmine.getEnv().randomizeTests(true);

The downside of this is that tests that are dependent on each other will sporadically fail based on the test order, which can be hard to debug. When you see such a failure in CI, you want to be able to run the tests locally in the same order so you can debug the problem. For this we can print the seed to the console:

// Generate our own seed so we can print it in the console,
// for CI debugging.
const seed = jasmine.getEnv().seed() ||
  String(Math.random()).slice(-5); // Same seed function as Jasmine
jasmine.getEnv().seed(seed);
console.log(`Jasmine seed used: ${seed}`);

Now we can just plug the seed into the URL like ?seed=12345. 🎉

Asynchronous Behavior

The trickiest thing in testing front-end code is probably dealing with asynchronous behavior, so we’ve spent a lot of time on setting this up right. There are two main approaches to this:

  1. Keep the application code asynchronous, and in tests wait until the code is finished running before making assertions.
  2. Stub asynchronous browser functions by introducing an artificial clock that we can move forward arbitrarily in tests to simulate time moving forward.

(1) has the downside of having to wait in tests for application code to finish. It can also be difficult to know exactly when it has finished—you have to always pass through a callback or Promise for the test to use. So we went with option (2).
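To make option (2) concrete, here is a minimal sketch of the idea behind a fake clock. This is not Jasmine's actual implementation, just an illustration: scheduled callbacks go into a queue, and only run when the test explicitly advances time.

```javascript
// A minimal fake clock: setTimeout queues callbacks instead of
// scheduling them, and tick() runs whatever has become due.
class FakeClock {
  constructor() {
    this.now = 0;
    this.timers = []; // each entry: { time, fn }
  }
  setTimeout(fn, delayMs) {
    this.timers.push({ time: this.now + delayMs, fn });
  }
  tick(ms) {
    this.now += ms;
    // Run all timers that are now due, in chronological order.
    const due = this.timers.filter(t => t.time <= this.now);
    this.timers = this.timers.filter(t => t.time > this.now);
    due.sort((a, b) => a.time - b.time).forEach(t => t.fn());
  }
}
```

A test then calls `clock.tick(…)` instead of waiting, which is exactly what `jasmine.clock()` provides for the real `setTimeout`.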

First we set up the fake clock and mock date, both built into Jasmine. This replaces functions like setTimeout and new Date(). It’s important to do this before any libraries and polyfills are loaded, as they can store handles to those functions.
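The setup itself is short; assuming Jasmine’s standard clock API, it looks something like this:

```javascript
// Install Jasmine's fake clock (replaces setTimeout, setInterval, etc.)
// and mock the Date constructor. Run this before loading any libraries.
jasmine.clock().install();
jasmine.clock().mockDate();
```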


Then there are some other asynchronous functions that Jasmine currently doesn’t replace, so we replace them ourselves. One example is setImmediate, which we can replace by a timeout with 0 milliseconds:

window.setImmediate = fn => window.setTimeout(fn, 0);
window.clearImmediate = id => window.clearTimeout(id);

Some libraries use it internally, in which case you’d have to call jasmine.clock().tick(1) in your tests. Another example is requestAnimationFrame, but there we want to replace it with at least 1 millisecond per frame, so we can step through it if we need to:

window.requestAnimationFrame = fn => window.setTimeout(fn, 1);
window.cancelAnimationFrame = id => window.clearTimeout(id);

We also install jasmine.Ajax to stub out calls to the server:
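The install call is a one-liner using jasmine-ajax’s standard hooks, wired into Jasmine’s lifecycle:

```javascript
// Stub out XMLHttpRequest for every test, and restore it afterwards.
beforeEach(() => jasmine.Ajax.install());
afterEach(() => jasmine.Ajax.uninstall());
```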


Now there should not be any asynchronous waiting in tests any more! So we can tighten the timeout on asynchronous tests (in case you still want to use that syntax):

jasmine.DEFAULT_TIMEOUT_INTERVAL = 10; // milliseconds

Asynchronous Test Example

To see what an asynchronous test looks like with this setup, let’s try to test this function:

function ajaxCallWithTimeout(url, timeoutMs, onFinish, onTimeout) {
  let done = false;
  const xhr = new XMLHttpRequest();
  xhr.onreadystatechange = () => {
    if (xhr.readyState !== XMLHttpRequest.DONE) return;
    if (!done) onFinish(xhr);
    done = true;
  };
  xhr.open('GET', url);
  xhr.send();
  setTimeout(() => {
    if (!done) onTimeout();
    done = true;
  }, timeoutMs);
}

This is what the happy-path test would look like:

it('calls `onFinish` when the request comes back in time', () => {
  const onFinish = jasmine.createSpy('onFinish');
  const onTimeout = jasmine.createSpy('onTimeout');
  ajaxCallWithTimeout('test.json', 100, onFinish, onTimeout);
  jasmine.clock().tick(99); // Move clock forward by 99ms.
  jasmine.Ajax.requests.mostRecent().respondWith({ status: 200 });
  expect(onFinish).toHaveBeenCalled();
  jasmine.clock().tick(20); // Move clock forward some more.
  expect(onTimeout).not.toHaveBeenCalled(); // Still not called.
});

Hurray for arbitrarily manipulating time! ⏰

Tightening Asynchronous Tests

We noticed that we often want to make sure that at the end of a test nothing changes if you move forward time a bit more, like the last two lines of the test above. Typically this means that no more callbacks should be called, and no more Ajax requests should be made. We haven’t yet figured out how to do the first part (no more callbacks), but we did tighten against any more Ajax requests:

afterEach(() => {
  jasmine.clock().tick(1000000);
  if (jasmine.Ajax.requests.count() > 0) {
    fail('Requests were made after the test.');
  }
  if (jasmine.Ajax.stubs.count > 0) {
    fail('Stubs were set after the test.');
  }
});

One more source of asynchronicity is Promises. According to the spec, handler functions should execute asynchronously. Because of this we use a Promise polyfill that internally uses setTimeout, even if the browser we run our tests in supports Promises natively.
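The requirement is easy to see in a toy sketch (this is not a real Promise polyfill, just the scheduling idea): if `then` handlers are deferred through setTimeout, then whatever stubs setTimeout — including Jasmine’s clock — also controls when the handlers run.

```javascript
// Toy "thenable" whose handler is deferred via setTimeout, so a fake
// clock that stubs setTimeout decides when the handler actually runs.
function makeThenable(value) {
  return {
    then(handler) {
      setTimeout(() => handler(value), 0);
    },
  };
}
```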

We can even write a test to make sure Promises use the Jasmine clock:

it('uses the Jasmine clock', () => {
  const onThen = jasmine.createSpy('onThen');
  Promise.resolve().then(onThen);
  expect(onThen).not.toHaveBeenCalled();
  jasmine.clock().tick(1);
  expect(onThen).toHaveBeenCalled();
});

Tightening Tests

We try to tighten our tests as much as possible in order to catch as many bugs as possible, like how we tightened asynchronous tests above. Another example is not allowing logging to the console in any way. This catches errors and warnings from libraries, like React’s PropTypes. When legitimately logging to the console, you can still stub out the console method, which we do in a few places. We also ignore some logging by tools:

const oldConsoleFunctions = {};
Object.keys(console).forEach(key => {
  if (typeof console[key] === 'function') {
    oldConsoleFunctions[key] = console[key];
    console[key] = (...args) => {
      // Detect Karma logging to console.error
      // by looking at the stack trace.
      if (key === 'error') {
        const error = new Error();
        if (error.stack &&
            error.stack.match(/KarmaReporter\.specDone/)) {
          oldConsoleFunctions[key].apply(console, args);
          return;
        }
      }
      // Don't fail tests when React shamelessly self-promotes.
      if (args[0] && args[0].match && args[0].match(/React DevTools/)) {
        oldConsoleFunctions[key].apply(console, args);
        return;
      }
      throw new Error("Don't log to console during tests");
    };
  }
});

Another way to tighten tests is to make sure there are no DOM elements from tests left on the page after running a test, as that could leak state between tests. Since we always mount elements on <body>, we can just check whether its number of children has changed:

let numberOfElementsInBody;
beforeEach(() => {
  numberOfElementsInBody = document.body.childElementCount;
});
afterEach(() => {
  if (document.body.childElementCount !== numberOfElementsInBody) {
    throw new Error('Forgot to clean up elements in <body>');
  }
});

This is an assertion on global state, to make sure it doesn’t leak between tests. The alternative would be to clear out the global state before each test (e.g. having a special <div> that all DOM elements get mounted into, and clearing it out before each test), which also works.
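That alternative is a couple of lines; a sketch, assuming a hypothetical dedicated #test-root container element:

```javascript
// Sketch of the alternative: every test mounts into a dedicated
// container, which is emptied before each test instead of asserted on.
beforeEach(() => {
  document.getElementById('test-root').innerHTML = '';
});
```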

These are just some examples we came up with as we developed our product, but the principle of tightening tests can be more widely applied to any invariants you might have in your application. For example, on the backend you could check complicated database invariants that cannot be expressed as table constraints.


Conclusion
We showed how we configure Jasmine, but the underlying ideas are more widely applicable. For example, on the backend we use RSpec, which supports random test order, we stub out external requests using WebMock and Puffing Billy, and we tighten tests by running database invariant checks after each test and not allowing any warnings to be logged.

If you have any suggestions for how to configure Jasmine, be sure to leave a comment below. And of course we welcome contributions to our config! 🌟

If any of this interests you, take a look at our open roles. We care about livable cities even more than developer tools. :)


