God, Client Testing is Still Horrible

a pile of moss
6 min read · Dec 5, 2015


A few weeks ago I wrote about how we started using Testling to run our tests in an automated fashion using Xvfb & Chrome in Linux.

We’ve spent the last three weeks continuously tweaking our Testling configuration. First we needed a better way to get the non-commonJS files we shim into the browser. Then we needed a way to specify a bunch of HTML test fixtures. Then we needed to set up a websocket server so we could verify that the websocket parts of our app were working. (Yes, we could have mocked it with something like proxyquireify or sinon, but I’ve got much better things to do with my time than mock an entire server interface.)

Each of these problems led down a rabbit hole of solutions. Most of the stuff around Testling tells you to invoke it via:

browserify tests/*.spec.js | testling

So like, great! Only if you invoke testling that way, it ignores ANY flags passed in. Awesome! So how do I get it to do what I want it to do? Well, if you look in the repo for Testling, you’ll find a markdown document which explains how to set up some variables in your package.json.

So forget what you read about how to invoke it; the actual instructions are in a markdown file in a directory in the repo. The usage instructions in the README don’t bother to mention that your flags will be ignored unless you’re invoking the command directly.
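For reference, the package.json approach looks something like this — a sketch, not our exact config: the files and server keys are the ones we used, the scripts key is my best recollection of how the extra script tags get in, and all the paths here are hypothetical:

```json
{
  "testling": {
    "files": "src/test/client/*.spec.js",
    "scripts": [
      "src/static/js/vendor/jquery.js"
    ],
    "server": "src/test/server/echo-server.js"
  }
}
```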

So now: how do we test our client code, in an automated fashion, without having to change our build system? Thankfully I spied someone retweeting a blog post from this summer by Rebecca Murphey about using Karma, webpack and istanbul to do exactly what we were trying to do.

Only she was using webpack, and we use browserify.

Thankfully, the Karma ecosystem is complete enough that there’s a browserify plugin for it as well. And there’s a browserify transform for istanbul. And a karma plugin for gathering coverage information from istanbul. And a karma plugin for reading TAP output from a test suite. And a plugin for running your tests in Chrome — however it does not handle Xvfb for you if you need it.
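That stack translates into a handful of dev dependencies. The set we ended up pulling in looked roughly like this (package names as published on npm; your exact list may differ):

```shell
npm install --save-dev karma karma-browserify karma-tap \
  karma-coverage karma-chrome-launcher browserify-istanbul
```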

So after an entire day of hacking around we finally migrated to Karma. But again: an entire day spent trying to piece together a ton of various moving parts into a cohesive system.

Our initial Karma config (mostly copied from Rebecca Murphey’s) was:

module.exports = function (config) {
  config.set({
    frameworks: ['tap', 'browserify'],
    files: ['src/test/client/*.spec.js'],
    reporters: ['dots'],
    colors: true,
    browserify: {
      debug: true,
      transform: ['envify', 'browserify-shim']
    },
    preprocessors: {
      'src/test/client/*.spec.js': ['browserify']
    }
  })
}

But we’re not even close to being ready yet!

Getting shimmed non-commonJS files into the browser

The first hurdle was figuring out how to provide jQuery, jQuery UI, Slickgrid, etc. to the test HTML page. This can be accomplished in Karma by adding entries to the files key of the configuration.

files: [
  {pattern: 'src/static/js/vendor/jquery.js', watched: false, included: true, served: true, nocache: false}
]

Most of the examples tell you this is how you include your tests in Karma. However, it is also how you include anything else that needs to be loaded via a <script> tag in the browser.

But! Those examples also tell you to do:

files: ['src/test/client/*.spec.js']

Which causes Karma to throw an obtuse error if you mix it with the object style above. The answer here is to make them all objects. Oh, and if you want the vendored files loaded in a specific order, you’d better list them in that order.

Thus, we end up with something rather ugly:

const files = pkg.vendor.scripts.map(function (scr) {
  return {pattern: scr, watched: false, included: true, served: true, nocache: false}
})
files.push({pattern: 'src/test/client/*.spec.js'})

Then we replace the files entry from our initial config with the new files array we’re generating. FYI: we read the vendor scripts from the package.json file, where they already lived for Testling (under a different name), since our custom build script needs the same list and we only want to write it once.
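For illustration, the vendor list in our package.json looks something like this (the key name and paths here are hypothetical; the libraries are the ones mentioned above):

```json
{
  "vendor": {
    "scripts": [
      "src/static/js/vendor/jquery.js",
      "src/static/js/vendor/jquery-ui.js",
      "src/static/js/vendor/slickgrid.js"
    ]
  }
}
```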

(Another thing: I’d have loved to use the ES6 fat-arrow syntax here, but since we’re returning an object literal we’d have to wrap it in parentheses or fall back to a block with a return statement, so we might as well just use the word function and be on with it.)
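In fairness, wrapping the object literal in parentheses does let an arrow function return it without a block. A sketch, with hypothetical paths:

```javascript
// Wrapping the object literal in parentheses makes it the arrow's
// expression body, so no block or return statement is needed.
const scripts = ['vendor/jquery.js', 'vendor/slickgrid.js'] // hypothetical paths

const files = scripts.map((scr) => ({
  pattern: scr,
  watched: false,
  included: true,
  served: true,
  nocache: false
}))

console.log(files[0].pattern) // → vendor/jquery.js
```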

Adding a custom server

In Testling, once we got it configured correctly, we were able to provide a server that it would stand up for us so we could test our socket.io integration. It’s a small amount of test coverage, but an important one, since everything on the client receives its information from the socket rather than from an HTTP REST interface.

Testling was actually surprisingly easy here — once we had gotten around the whole “piping-into-testling-doesn’t-really-work-for-90%-of-your-problems” thing. We specified a server key pointing to a very simple echoing socket.io server:

const http = require('http')
const socket = require('socket.io')

const app = http.createServer(() => {})
const io = socket(app)

io.on('connection', function (socket) {
  socket.on('message', function (msg) {
    if (msg.type === 'disconnect') {
      return socket.disconnect()
    }
    socket.emit('message', msg)
  })
})

app.listen(parseInt(process.env.PORT, 10))

Giving that to Testling did everything you’d expect. It was even listening on the same port the rest of the tests came from!
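A spec that exercises the echo server looks roughly like this — a sketch, assuming tape as the harness and socket.io-client for the connection; the URL depends on how the server is exposed (envify lets process.env.PORT survive into the bundle):

```javascript
const test = require('tape')
const io = require('socket.io-client')

test('server echoes messages back', function (t) {
  t.plan(1)
  const socket = io('http://localhost:' + process.env.PORT)
  socket.on('connect', function () {
    // socket.send() emits a 'message' event, which the server echoes
    socket.send({type: 'echo', body: 'hello'})
  })
  socket.on('message', function (msg) {
    t.equal(msg.body, 'hello')
    socket.send({type: 'disconnect'}) // ask the server to drop us
  })
})
```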

In order to get this working in Karma, I ended up having to write a (very very simple) plugin to create a socket.io server when Karma starts up. Thankfully there are A TON of examples of creating new frameworks in Karma, so I was able to mostly copy the example from karma-websocket-server.

So now we’ve got a new configuration block to add to our karma.conf.js:

socketioServer: {
  port: process.env.SOCKET_PORT,
  onConnect: function (socket) {
    socket.on('message', function (msg) {
      if (msg.type === 'disconnect') {
        return socket.disconnect()
      }
      socket.emit('message', msg)
    })
  }
}

Getting coverage

Now that our tests run (albeit not headlessly, and on my Mac rather than on my headless Linux dev machine or our CI environment), I figured I’d try to get coverage set up.

Initially I was disappointed that karma-browserify had a separate config from the main browserify config in package.json, but this is where that turns out to be a boon — I can enable the istanbul transform in only specific cases. (Coverage takes a while to build and run, so we don’t want it on every run, and especially not in production!)

So now we’ve got another reporter to add to the config, another block to add to configure that reporter (since we want lcov reports, the same as we get from node-tap on the server end), and a new item to add to the browserify config.

However, we don’t always want to run coverage — I would prefer to only run it in CI. Since we use Circle-CI, they provide a nice environment variable to determine if you’re in their environment. And thankfully, the karma config is an actual interpreted JS file, so we only add the coverage stuff if we’re in Circle-CI or if someone sets COVERAGE=true when running the tests.

if (process.env.CIRCLE_CI || process.env.COVERAGE) {
  config.browserify.transform.push(['browserify-istanbul', {'ignore': '**/3rdparty/**'}])
  config.coverageReporter = {
    type: 'lcov',
    dir: 'coverage/'
  }
  config.reporters.push('coverage')
}

Running Headlessly

OK! I’ve got code-bundling, coverage instrumentation, script-fixtures and a stood-up server! One last hurdle — getting Chrome to run headlessly!

We were already using Xvfb under Testling, so this is really an environment configuration issue. When the client tests run, we see if we’re in Circle-CI, and if not, do some detection to see if Xvfb is already running. If not, we start it. (Circle-CI automatically provides Xvfb on DISPLAY=:99.0, so we set ours up on the same display.)

This is done via the scripts section of our package.json:

"scripts": {
  "check-xvfb": "if [ \"$CIRCLE_CI\" != \"true\" ]; then test -e /tmp/.X99-lock || /usr/bin/Xvfb :99 & fi",
  "test-client": "npm run check-xvfb && DISPLAY=:99.0 karma start"
}

(Most of the other stuff removed for clarity)

And, so finally, our intrepid adventurers have a testing configuration that does what they need it to do, and that they can configure to their liking.

const pkg = require('./package')

module.exports = function (karma) {
  const files = pkg.vendor.scripts.map(function (scr) {
    return {pattern: scr, watched: false, included: true, served: true, nocache: false}
  })
  files.push({pattern: 'src/test/client/*.spec.js'})

  const config = {
    frameworks: ['tap', 'browserify', 'socketio-server'],
    preprocessors: {
      'src/test/client/*.spec.js': ['browserify']
    },
    socketioServer: {
      port: process.env.SOCKET_PORT,
      onConnect: function (socket) {
        socket.on('message', function (msg) {
          if (msg.type === 'disconnect') {
            return socket.disconnect()
          }
          socket.emit('message', msg)
        })
      }
    },
    reporters: ['dots'],
    port: 9876,
    colors: true,
    logLevel: karma.LOG_DISABLE,
    browsers: ['Chrome'],
    files: files,
    singleRun: true,
    browserify: {
      debug: true,
      transform: ['envify', 'browserify-shim']
    }
  }

  if (process.env.CIRCLE_CI || process.env.COVERAGE) {
    config.browserify.transform.push(['browserify-istanbul', {'ignore': '**/3rdparty/**'}])
    config.coverageReporter = {
      type: 'lcov',
      dir: 'coverage/'
    }
    config.reporters.push('coverage')
  }

  karma.set(config)
}

But why do we have to jump through all of these hoops? There’s nothing unique about our situation that demands this much configuration (the only thing that strikes me as at all out of place is the socket.io server we stand up). Needing browserify, coverage and some fixtures seems pretty de rigueur for testing, so why does this take so long to set up?
