First Steps in Frontend Testing with TDD/BDD, Part II

Ariel Herman
4 min read · Apr 13, 2018


This post is a follow-up to my previous post, which explored the reasons for using TDD/BDD in development. Now, assuming you already have a project with a package.json file to manage your dependencies, here’s a step-by-step guide to setting up unit tests with mocha in a separate directory from your production code. (If not, run npm init and accept the defaults, which will set mocha as the default test command in your package.json.)

  • Do a global install of mocha with npm install -g mocha if you haven’t already. This lets you run the mocha command directly in the terminal
  • Add mocha and chai to the project with npm install --save-dev mocha chai (as development dependencies, discussed in my previous post)
  • All tests should be located in one directory at the top level of your project called “test” (must be all lowercase test, not “tests” or “Test”!)
  • You can then run npm test and mocha will automatically run all the tests for you. If for whatever reason you only want a specific test, you can run mocha test/[test_name].js (or npm test -- test/[test_name].js without the global install of mocha)
  • Create a file inside the test directory called main_test.js. Running npm test should now show “0 passing”
  • Within main_test.js, begin setting up your test suite. A suite is a collection of unit tests that are all related because they test similar parts of the code base.
  • Create another directory at the same level as test that will store your production code files. (In this example mine is called game_logic because it came from an implementation of the game battleship)
  • Each suite starts with describe(), which takes a string description of a function and an anonymous callback that acts as a wrapper for all the unit tests you include in the suite
  • Write a “sanity check” to make sure mocha is hooked up properly:
// require chai so that you can use its .expect() fn in your specs
const expect = require("chai").expect;

// TEST SUITE:
// describe() is a wrapper for all the specs in the suite
describe("Mocha", () => {
  // Test spec (unit test):
  it("should run our tests using npm", () => {
    // .ok is a chai assertion that tests whether a value is truthy
    expect(true).to.be.ok;
  });
});
  • The above test should pass when you run npm test. From there you can require/export functions as needed with the module export pattern from your production code
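The module export pattern can look like the following sketch (the file and function names are illustrative, in keeping with the battleship example; they are not prescribed by mocha):

```javascript
// game_logic/ship_methods.js — production code exposed via module.exports
// (file and function names here are illustrative)
function checkForShip(player, coordinates) {
  // true if any of the player's ships occupies the given coordinates
  return player.ships.some((ship) =>
    ship.locations.some(
      (loc) => loc[0] === coordinates[0] && loc[1] === coordinates[1]
    )
  );
}

module.exports = { checkForShip };
```

In a spec file you would then pull it in with const { checkForShip } = require("../game_logic/ship_methods");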
  • An example of a “complete” spec:
describe("fire", () => {
  const fire = require("../game_logic/ship_methods").fire;
  let targetPlayer;

  it("should register damage on a given ship at a given location", () => {
    targetPlayer = { ships: [{ locations: [[0, 0]], damage: [] }] };
    fire(targetPlayer, [0, 0]);
    expect(targetPlayer.ships[0].damage[0]).to.deep.equal([0, 0]);
  });
});
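A production-side fire() that satisfies this spec could look something like this sketch (an assumption for illustration; your actual game logic may differ):

```javascript
// game_logic/ship_methods.js — a minimal fire() that satisfies the spec above
// (a sketch, not a definitive implementation)
function fire(player, coordinates) {
  player.ships.forEach((ship) => {
    const hitIndex = ship.locations.findIndex(
      (loc) => loc[0] === coordinates[0] && loc[1] === coordinates[1]
    );
    // a hit: record the damaged coordinates on the ship
    if (hitIndex >= 0) {
      ship.damage.push(coordinates);
    }
  });
}

module.exports = { fire };
```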
  • Remember to test for only one thing at a time. If something seems too hard to test meaningfully, consider whether you can uncouple tightly coupled logic in your thought process or in code you’ve already written. Once your code is written and tests are passing, you can refactor your test code.

Refactoring

  • Setup: use before() to do a task once before the entire suite, and beforeEach() before each spec
  • Teardown: leaving unwanted test variables floating around could interfere with later tests. You might need teardowns if your tests set up a pretend database or interact with the DOM. If you’re relying too heavily on teardown, double-check that you’re testing the right kind of function. Use after() and afterEach()
  • Handle edge cases by throwing errors in your production code and using expect(someFunc).to.throw(Error) in your tests. Throwing errors helps you and other devs working on the same project by flagging essential functionality and required parameters
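For example, a guard clause in the production code pairs with a throw assertion in the spec (the error message here is an assumption):

```javascript
// game_logic/ship_methods.js — fail loudly when a caller forgets an argument
function fire(player, coordinates) {
  if (!player || !coordinates) {
    // the message is illustrative; pick one that helps your teammates
    throw new Error("fire needs a target player and coordinates");
  }
  // ...damage logic as before...
}

// and in the spec:
//   it("should throw when called without arguments", () => {
//     expect(() => fire()).to.throw(Error);
//   });

module.exports = { fire };
```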
  • Mocha reporters: mocha --reporter min shows full error details only for failing specs. mocha --reporter markdown exports your tests in Markdown format so you can use them on GitHub as a jumping-off point for documenting your project! You can also set your favorite reporter in your package.json:
"scripts": {
  "test": "mocha --reporter nyan"
}
  • Outlining tests: specs without the second callback argument are shown as “Pending” when you run npm test. This is helpful if you know you want to add a certain test but will write it later. You can also mark them with xdescribe() or xit()
  • Watching tests: you can write custom mocha --watch commands whenever you plan to work on one particular file a lot, so you don’t have to switch back to the terminal to run npm test constantly. The first argument describes the tests you want to run; the second argument describes the files you want to watch for changes. You must include a . at the start of your file paths, or mocha will get confused about which files you’re looking for. For example:
    mocha --watch ./test/game_test.js ./game_logic/game_instance.js
    You can also save a general --watch command in your package.json so you can run your tests all the time. For example, the test:watch script below runs all the tests in the test directory and watches for changes to every file in the current directory (note that JSON doesn’t allow comments, so explanations like this can’t live inside package.json):
"scripts": {
  "test": "mocha",
  "test:watch": "mocha --watch ./test ./"
}

Then you can run npm run test:watch in the terminal. You can also add more custom configurations like test:watch:playerMethods to your package.json so that you only monitor specific suites
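A more targeted watch script could look like this (the file names are hypothetical, following the battleship layout):

```json
"scripts": {
  "test": "mocha",
  "test:watch": "mocha --watch ./test ./",
  "test:watch:playerMethods": "mocha --watch ./test/player_test.js ./game_logic/player_methods.js"
}
```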

That’s definitely a crash course, but it’s a good head start on writing your own test suites! I left out mocks and stubs, as well as testing asynchronous functions, which is a story for a different post. Next up in this series, I will try my very first soup-to-nuts TDD/BDD vanilla JS app. The final stage will be writing tests for React components, perhaps with Enzyme.

Resources

Chai Docs
Team Treehouse: JavaScript Unit Testing (video)
First Steps in Frontend Testing with TDD and BDD
