Automated game system testing using Karma and Travis CI

Pontus Alexander
ReCreating Megaman 2 using JS & WebGL
9 min read · Aug 25, 2016

A while ago I wrote about successes in testing, and at the end of that article I talked about getting these system tests to run on Travis CI. Travis CI is a service that offers free build pipelines for open source projects hosted on, for example, GitHub.

To make that even clearer: if you host a project publicly on GitHub, Travis CI can be added with a few clicks and will run whatever you want on your project before a merge, to make sure no preventably broken code reaches master.

It took some fiddling to get it working, but I now have an automated unit + integration + system test suite that I can run both locally and on continuous integration systems.

In the end it was not very bloody, but there were a couple of non-default steps to get it working properly.

Karma Runner

Before I tried to get the tests to run automatically, I could already run them manually in the browser by simply loading an HTML file that bootstrapped the game engine plus the test spec files. What was missing was a way to start them automatically from a signal such as the command line, and to propagate successes and failures to Travis so that GitHub can prevent broken pull requests from creeping into the master branch. I also wanted code coverage for tests run in the browser.

It turns out Karma Runner, a JavaScript test runner, could help me with this.

I started by installing the dependencies from npm.

npm install karma karma-chrome-launcher karma-coverage karma-mocha karma-mocha-reporter
  • karma: the core component.
  • karma-chrome-launcher: launches the Google Chrome browser.
  • karma-coverage: Karma wrapper for istanbul, a code coverage tool I was already using for unit tests.
  • karma-mocha: Karma wrapper for mocha, the test framework used for writing the tests.
  • karma-mocha-reporter: prints test results in the same style as mocha's default reporter.

To be honest, I found it awful to configure Karma from scratch. The documentation reads like it is straightforward, but there were a lot of assumptions that might not be true for all projects, and I had to solve a multitude of unmentioned problems to make it work.

I really dislike that this article reads like Karma bashing. I know I'm not using Karma according to its original intentions, so the problems I ran into are, to a degree, my own fault.

The full config as of the writing of this post can be found here: https://gist.github.com/pomle/12db2dee84935c326c5f7d76f275162e

Specify included files

You have to specify all files that your tests need so that Karma knows about them. This is because Karma runs its own web server, from which it serves all files in your project under a magic endpoint, /base/. In the end I think this is not a bad thing, but it might require you to do some hoop jumping. Luckily, it supports wildcards (*).
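Roughly, the dependency portion of my config looks like the sketch below; the exact paths are assumptions of mine, and the real thing is in the gist linked above.

const fs = require('fs');

// My own convention, not a Karma thing: dependencies are kept
// separate from test specs. Paths here are illustrative.
let dependencies = [
  'node_modules/three/three.min.js', // the THREE.js library
  'test/mocks/webgl.js',             // a small WebGL mock
];

// All game engine files are listed in a manifest; prepend src/ to each.
const scripts = JSON.parse(fs.readFileSync('./src/script-manifest.json'))
  .map(script => 'src/' + script);

// Add the manifest files to the dependency array.
dependencies = dependencies.concat(scripts);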

Prepare game engine files and a WebGL mock.

The first thing I do is prepare an array of files that are dependencies for running the tests. The dependencies const is not a Karma thing; it is what I use to delineate between test dependencies and test specs.

First, Karma is told to load the THREE.js library.

Then a small WebGL mock is included so that we can run the code on platforms that do not render.

Next, a JSON file with a list of all game engine files is loaded, and each entry is prepended with src/. I use this script-manifest.json instead of require because I really dislike having a build step to test my code, and this way I can run the vanilla code in the browser immediately on change, with nice debugging and without source maps.

Lastly, the files from the manifest are added to the dependency array.
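The test-facing files are collected in a second array, sketched below; the patterns are again assumptions standing in for the real config.

const testFiles = [
  // Files Karma should serve and watch, but not execute as tests.
  { pattern: 'src/resource/**/*', included: false, served: true, watched: true },
  { pattern: 'test/fixtures/**/*', included: false, served: true, watched: true },

  // Test support libraries that never change; no need to watch these.
  { pattern: 'node_modules/expect.js/index.js', watched: false },
  { pattern: 'node_modules/sinon/pkg/sinon.js', watched: false },

  // The test specifications themselves.
  'test/unit/*.js',
  'test/integration/*.js',
];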

Notify Karma about files it needs to serve.

The first entries of the testFiles blob above tell Karma that I have a bunch of files in src/resource/ that it needs to serve and watch. Watch meaning that if any of them change, the test suite should rerun while Karma is running.

The next entries say the same thing about some test fixtures. Fixtures means that they are not *the* tests but are used *for* testing. They could be static levels that contain only a very limited set of components, for example ladders and solids if I need to test ladder behavior.

Then come more test support files that do not need to be watched, like expect.js, the assertion library, and sinon, the mocking library.

Lastly, I include the test specifications: the code that actually contains the tests that should run.

Set up test proxies

This was the most painful thing to get right. Since I want to be able to run the tests manually while I develop them, so that I can watch the game render, I can't assume that /base exists. I could set up a web server in my manual environment to serve files from /base, but the test runner should work for you, and you should not work for the test runner. So in order to run the same code in two different browser environments, I had to tell Karma that /src and /test are actually in /base. They aren't really; it's just what Karma wraps everything in.
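In Karma config terms this is the proxies option; a minimal sketch:

proxies: {
  '/src/': '/base/src/',
  '/test/': '/base/test/',
},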

Specify proxy endpoints, or we could just call them redirects.

As I understand it, the reason Karma is designed like this is that it serves its own resources to the browser from its web server and namespaces all other resources to /base. I would prefer if it namespaced its own resources instead, so it didn't conflict with any user files. Understanding why this was happening was an ordeal.

Specifying custom DOM

Out of the box, Karma will run your tests in a blank DOM. If your tests require anything from the DOM, you will need to supply your own HTML file that the tests run in.

This makes absolute sense, but I couldn't find any documented requirements for the custom HTML file. I expected any dependencies to be automatically injected or loaded, but a set of very specific markup and scripts is needed in this file for it to work; otherwise running the tests will just silently do nothing.

The elements I needed were:

<div class="game">
    <div id="screen" style="width: 640px; height: 360px"></div>
</div>

The rest is things Karma expects to be there. I had to go into the Node module and find the default HTML file to figure it out.
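For reference, here is a sketch of how it ended up. The file name is my own choice, pointed out to Karma with the customContextFile option:

customContextFile: 'test/context.html',

And the file itself, roughly; the Karma-specific tags should be copied from node_modules/karma/static/context.html for the version you actually have installed:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
</head>
<body>
  <!-- The markup the game expects. -->
  <div class="game">
      <div id="screen" style="width: 640px; height: 360px"></div>
  </div>

  <!-- Karma plumbing from the default context.html. Without
       these, running the tests silently does nothing. -->
  <script src="context.js"></script>
  %SCRIPTS%
  <script type="text/javascript">
    window.__karma__.loaded();
  </script>
</body>
</html>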

Specify test framework and reporter

This simply tells Karma how tests are written and what reporter to use. These values reference the packages karma-mocha and karma-mocha-reporter that we installed earlier.
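In the config, that is just:

frameworks: ['mocha'],
reporters: ['mocha'],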

Now your custom test suite should run with the command below.

karma start .karma.conf.js --single-run

This should open a Chrome window and you should see something like the following.

Example test output using Mocha reporter. Output has been truncated.

So this is pretty neat. Now we can automate the running of tests in a browser environment.

But hold that champagne!

We need to enable code coverage and make this run in Travis CI.

Code Coverage

Activating code coverage involves a couple of steps. Besides installing the karma-coverage package, we need to:

Tell the coverage system to pre-process our files.
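A sketch of the preprocessor setup; which files to instrument is project-specific, and here I assume everything under src/:

preprocessors: {
  'src/**/*.js': ['coverage'],
},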

Pre-processing inserts instrumentation into the files: inert bookkeeping code that does not change their behavior, but lets us know which parts of the code actually ran.

Then we tell the coverage reporter what reports to write and where to write them.
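Something like the sketch below; the directories and report types here are my choices:

coverageReporter: {
  dir: './test/coverage',
  reporters: [
    { type: 'json', subdir: '.' },    // machine-readable, for combining later
    { type: 'html', subdir: 'html' }, // browsable report
  ],
},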

And we add ‘coverage’ to the reporters array.
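So the reporters line from earlier grows to:

reporters: ['mocha', 'coverage'],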

Now when we run Karma again with

karma start .karma.conf.js --single-run

we should see “Writing coverage to /path” printed at the end.

I ran into several issues with coverage reports being empty. If this is the case, double-check the preprocessors configuration.

Ensure the preprocessor has actually preprocessed the files you expect by setting logging to debug and checking the output for the correct files, and make sure your tests did not time out.
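Debug logging is turned on in the config, or with --log-level debug on the command line:

logLevel: config.LOG_DEBUG,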

A timed-out test will look like it succeeded and write empty code coverage.

When a test times out, everything will look fine, and the coverage module will write its report as if everything is fine. It's just that the report is completely empty, which might drive you up the wall.

An empty, seemingly correct coverage report.

It would be nice if the default behavior were to signal when the test is done, so that a timed-out test counts as an error. It seems there is no way to differentiate between a timeout and a completed run without some more hacking.

To increase the timeout, use the configuration property browserNoActivityTimeout.
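For example, to allow a full minute of silence instead of the default ten seconds:

browserNoActivityTimeout: 60000,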

If you are running code coverage for both unit tests in Node and other tests in Chrome, you will now have two coverage reports. These can be combined using istanbul very simply with

istanbul report --dir ./test/coverage/sum

This command looks suspiciously argument-free. That is because there is a lot of magic built into just running istanbul report. It automatically scours the directory tree from where you run it, looks for all files named coverage.json, and combines them into a new report. In the command above, we only tell istanbul to write the combined report to the ./test/coverage/sum dir.

Running on Travis CI

In order to run a decent version of Chrome on Travis we need to do two things:

  • Install Chrome
  • Fake a display

Installing Chrome requires sudo, so you have to put sudo: required in your .travis.yml config file. Below I have inlined both steps in my config.
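A sketch of what that can look like; the apt source and xvfb lines follow the Travis CI documentation of the time, so the details may differ from my actual file:

sudo: required

addons:
  apt:
    sources:
      - google-chrome
    packages:
      - google-chrome-stable

before_script:
  # Fake a display for Chrome to render to.
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start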

My .travis.yml requesting sudo, installing recent Chrome and faking a monitor.

Lastly, we need to run Chrome with a special argument on Travis, so in our Karma config we should specify one last thing.
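The special argument is --no-sandbox, and the pattern below is the one documented by Travis CI for Karma setups:

module.exports = function(config) {
  const configuration = {
    browsers: ['Chrome'],
    customLaunchers: {
      // Chrome must run without its sandbox on Travis.
      Chrome_travis_ci: {
        base: 'Chrome',
        flags: ['--no-sandbox'],
      },
    },
    // ...the rest of the config from earlier...
  };

  if (process.env.TRAVIS) {
    configuration.browsers = ['Chrome_travis_ci'];
  }

  config.set(configuration);
};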

The above sets up a custom browser profile, arbitrarily named "Chrome_travis_ci", and tells Karma to use that profile if the environment variable TRAVIS exists.

Okay, that was a lot of work, but in the end we got an invaluable benefit: full, real-environment, automatic tests with code coverage reporting. True fearless development.

So what does this look like on GitHub?

Before successful build

When the test result is not in, merging is prevented.

After successful build

On a successful test, I get the latest code coverage report and the merge button goes green.

And you can check out a successful build on Travis CI here: https://travis-ci.org/pomle/megamanjs/builds/155102279. As you can see, there is tons of debug output because I run Karma at debug-level logging. This is so that if a build fails and it's not because of a test case, I have the greatest chance of detecting why without having to run it locally.

Okay, that was what, like 45 feet worth of blog post. I really hope this helps someone. Thank you for reading!
