Continuous Verification

Using Node.js and mock-fs

Rob Levin
Kantata Product Development
6 min read · Apr 3, 2019


Illustration by Rob Levin

Note that in this article we’ll discuss some of the high-level concerns involved in introducing verification scripts into a continuous integration process. While some technical samples are given, we won’t cover every detail involved (such as setting up a continuous integration system from scratch or using package managers), and some prior knowledge is assumed.

Automated Outreach

We’ve been introducing linting and verification scripts into our continuous integration process at Mavenlink (credit to my colleague Juanca, who was the first to introduce these). These scripts help us not only enforce code style, something linters have been doing for ages, but also nudge our whole team toward meeting certain coding guidelines; automated outreach, if you will.

We’ve found that while lunch and learns, RFCs, training guides, peer review, pairing, and the like can all help move a team’s culture toward adopting certain best practices, by themselves they are not enough. Best practices need to be integrated right into your continuous integration system, so that folks notice them in the flow of their normal working process and are, hopefully, enticed to embrace the coding guideline in question.

Empathy

It can come off as a bit Machiavellian to fail someone’s GitHub pull request because they didn’t follow practices espoused by some “all-knowing” Code Gatekeeper. We need to have empathy and realize that this developer’s failed build might mean they have to go tell their manager that feature X is going to take a bit longer to deliver than promised… obviously no fun!

We should do all we can to guide them in the right direction with meaningful error messages and, preferably, a link to any relevant documentation that can help them resolve the matter quickly:

This error gets reported by the verification script when a developer has placed an SVG icon on the page without following our coding guidelines for doing so.
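
That screenshot isn’t reproduced here, but a hypothetical reporting helper sketches the general shape of such a message (the command, file path, and documentation URL below are illustrative assumptions, not our actual values):

```js
// A hypothetical reporting helper; all names and URLs here are illustrative
function reportViolation({ file, iconName }) {
  console.error(
    [
      `Hard-coded SVG icon "${iconName}" found in ${file}.`,
      'To check this locally, run: node scripts/verify-server-side-svgs.js',
      'SVG usage guidelines: https://example.com/docs/svg-usage',
    ].join('\n')
  );
}
```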

The developer who encounters this message benefits by being shown:

  • how to run the verification script from their local development environment
  • where to go to get more information on SVG usage guidelines

Such console output is useful to new developers being onboarded and forgetful old hands alike.

Verification

As mentioned, we use Node.js scripts to do the verification. These usually involve the following:

  • the ability to both be required as a module and run from the command line (a minimal skeleton follows this list)
  • globbing in the files we care about verifying
  • one or more regular expressions to run against the contents of those files
  • regex matches that signify violations, which are collected and then output to the console
  • a non-zero exit code when one or more violations are found; this signals to our CI system that a violation occurred and essentially fails the corresponding pull request (discussed further down in the article)
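
Here’s that minimal skeleton; the file and function names are assumed for illustration:

```js
// scripts/verify-server-side-svgs.js — a minimal skeleton; names are assumed
function verifyServerSideSVGs() {
  // ...glob in files, run regexes, collect and report violations...
  return 0; // the number of violations found
}

module.exports = { verifyServerSideSVGs };

// Only runs when invoked from the command line (e.g. by CI), not when required
if (require.main === module) {
  process.exit(verifyServerSideSVGs() > 0 ? 1 : 0);
}
```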

A Taste of the Details

Without diving into all the details of how these verification scripts work, we can examine one routine which takes a list of server-side partials and looks for any call to an svg_icon helper with a hard-coded icon name. In our practice this is a violation, as we prefer to use the Webpack loader svg-sprite-loader. That loader allows us to import the SVG dependency right from within the file where it’s used, and spares us from hard-coding an SVG icon name, which would create a global dependency on the icon actually being pushed into the page’s SVG sprite.

This method takes a list of server files, reads in their contents one by one, and then matches on a regex that indicates the aforementioned violation. Finally, it returns a list of these SVG icon misuses.
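
The original snippet isn’t reproduced here, but a sketch of such a routine, with the exact regex and return shape assumed, might look something like this:

```js
const fs = require('fs');

// Assumed pattern: an svg_icon helper call with a hard-coded, quoted icon
// name, e.g. <%= svg_icon "icon-close" %>
const HARD_CODED_SVG_ICON = /svg_icon\s*\(?\s*['"]([\w-]+)['"]/g;

function getAllSVGUsageInSSR(serverFiles) {
  return serverFiles.reduce((violations, file) => {
    const contents = fs.readFileSync(file, 'utf8');
    let match;
    // A global regex keeps state in lastIndex, so looping exec() walks every
    // match in the file (a partial may call svg_icon more than once)
    while ((match = HARD_CODED_SVG_ICON.exec(contents)) !== null) {
      violations.push({ file, iconName: match[1] });
    }
    return violations;
  }, []);
}
```

The serverFiles argument itself would be produced by a globbing library, e.g. glob.sync('app/views/**/*.html.erb').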

Obviously, there’s more to these verification scripts, but you can imagine that this list of “violations” returned by getAllSVGUsageInSSR gets used to eventually return a non-zero exit code.

Testing

When we first introduced these verification scripts, we excitedly hooked them up to our continuous integration system, writing them in the typical procedural style of shell and Node.js command-line scripts.

It didn’t take long to realize that the quality of the code in these scripts, in terms of peer review and test coverage, was not up to snuff with the rest of our application’s coding standards (we typically ensure there is spec coverage for submitted code). Bugs ensued 😞.

One reason no tests were introduced was that it can be baffling to figure out how to replicate the file tree these scripts glob over. mock-fs to the rescue!

These specs generally involve the following:

  • mock-fs to ease stubbing out the file system
  • the corresponding implementation script (a.k.a. system under test or SUT)

Here’s an example of setting up a spec that leverages mock-fs:
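
The embedded snippet isn’t shown here; a minimal reconstruction, assuming Jest-style globals and illustrative file contents, might be:

```js
const mock = require('mock-fs');

beforeEach(() => {
  mock({
    // The file our SUT writes to; it starts blank
    'frontend/erb-svg.js': '',
    // Server-side partials that act as inputs to the SUT
    'app/views/foo.html.erb': '<%= svg_icon "icon-foo" %>\n',
    // bar.html.erb purposely contains two svg_icon calls
    'app/views/bar.html.erb':
      '<%= svg_icon "icon-bar" %>\n<%= svg_icon "icon-baz" %>\n',
  });
});

afterEach(() => {
  // Restore the real file system after each spec
  mock.restore();
});
```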

In the above, we’ve created a sort of skeleton file system that’s the bare minimum for us to have in place to run the verification script.

While I believe the use of mock is fairly self-evident, there are a few things of interest here:

  • frontend/erb-svg.js is a file our SUT writes to. It starts blank, but we can call certain methods against our implementation, and then assert against certain contents being written
  • app/views/*.html.erb files are the inputs for our SUT
  • bar.html.erb purposely has two svg_icon calls, which we use to verify the tricky JavaScript handling of multiple matches from a global regex

Unit Tests

Our unit tests are generally quite simple. For example:
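
The original example isn’t shown here; assuming a hypothetical getServerSideFiles helper in the SUT, a spec along these lines fits the description below:

```js
it('globs in the server-side partials we care about', () => {
  // getServerSideFiles is a hypothetical name for the SUT's globbing helper
  const files = getServerSideFiles();

  expect(files).toHaveLength(2);
  expect(files).toEqual(
    expect.arrayContaining(['app/views/foo.html.erb', 'app/views/bar.html.erb'])
  );
});
```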

Given the skeleton file tree we created in the earlier example using mock-fs, it makes sense that the scraper has found 2 files: app/views/foo.html.erb and app/views/bar.html.erb.

Integration Tests

Sometimes an integration test is introduced (not always), as the scripts being verified are generally fairly small. Here’s an example:
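
Again, the original snippet isn’t shown; a reconstruction consistent with the mocked tree above (one hard-coded svg_icon call in foo.html.erb and two in bar.html.erb) might be:

```js
it('returns the total number of svg_icon violations found', () => {
  const violationCount = verifyServerSideSVGs();

  // foo.html.erb has one hard-coded svg_icon call; bar.html.erb has two
  expect(violationCount).toEqual(3);
});
```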

The returned value of 3 from the call to verifyServerSideSVGs represents the total number of violations found. This will in turn be used by the script to call Node’s process.exit with a non-zero exit code, in turn causing our continuous integration system of choice, CircleCI, to report a failure on GitHub:

CircleCI showing a failure on our GitHub pull request

Continuous Integration

The lovely GitHub pull request failure you see above (asset_linters) happens because our script is “hooked up” from our .circleci/config.yml as a run step in our asset_linters job:
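
The exact configuration isn’t reproduced here, but a hypothetical CircleCI 2.x excerpt (the Docker image and command name are assumptions) might look like:

```yaml
# .circleci/config.yml (excerpt); job details and command names are assumed
jobs:
  asset_linters:
    docker:
      - image: circleci/node:10
    steps:
      - checkout
      - run: yarn verify:server-side-svgs
```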

Where the command defined in our package.json scripts section is the actual command line invocation of our verification script:
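
Again hypothetical, that scripts entry might look like:

```json
{
  "scripts": {
    "verify:server-side-svgs": "node scripts/verify-server-side-svgs.js"
  }
}
```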

While the CircleCI specifics above may not apply to your continuous integration tools of choice, the general idea of exiting with a non-zero exit code can most likely be applied universally to whatever CI system you’ve chosen to use.

Conclusion

This article has been purposely surface-level for the sake of brevity, but has hopefully provided you with an overview of things to consider when using verification scripts. These scripts can really help you shepherd your team toward adopting certain coding guidelines in an automated fashion. Standing on the shoulders of mock-fs, we’ve found this approach to be quite effective and easy to implement.
