Automating your way to a higher standard

Programmers are lazy. That’s why we often automate tasks that we think are too cumbersome to do manually. You’ve probably, at one time, written a script to complete a task because you didn’t feel like doing a lot of repeated steps by hand, right? I have. A lot.

But automation isn’t just for the lazy programmer. It’s for the smart, stability-aware, quality-conscious programmer. Because you can automate things not only to lighten your workload, but also to make sure you have certain checks in place that help you with the ongoing development of your project.

An obvious example of this is unit tests. You write them yourself, but when you execute them, they test your code for you. There’s no need to manually check whether your code behaves as expected; you’ve automated that. But you still have to run the tests. That’s easy to forget, and that’s how small breakages can sneak into your code. So by writing tests you’ve automated the testing process, but you haven’t gone all the way.

Another example is updating your test environment. You can automate that too; there’s a myriad of technologies for automated deployment. At SIM, we use Ansible, so every project contains a bit of configuration that runs our mostly standardized Ansible playbooks to deploy it. Nice, it’s automated! But if you don’t run the deployment of your integration test environment every time there’s a merge to the master branch, the environment starts lagging behind. So you’ve automated it, but not all the way.
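To give an idea of what such a playbook involves, here’s an illustrative stub; the host group, repository and paths are assumptions for the sake of the example, not our actual configuration:

# deploy.yml — an illustrative stub, not our actual playbook
- hosts: test
  tasks:
    - name: Check out the latest master branch
      git:
        repo: git@gitlab.example.com:sim/project.git
        dest: /var/www/project
        version: master

An illustrative deploy playbook stub.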

There are other things you can automate, but the key to all of it is to make sure your automated processes actually run, and run often. And guess what: you can automate that too. I’d like to walk you through the automated steps we have in all or most of our projects, which not only make our work easier and quicker, but also make our software more stable and more secure.

Local automation: make sure you never check in ‘bad code’

We have two kinds of software projects: applications that are mainly PHP, and ones that are built in JavaScript. We do have combinations of the two, but for this article, we’re pretending it’s either-or.

In our PHP projects, we use tools to check our code against the PSR-2 standard, run unit tests and perform some additional checks, like validating the commit message to see if it contains a reference to our issue tracker. The tool we use for this is GrumPHP. It runs PHP-CS-Fixer to check the coding standards and, of course, PHPUnit to execute the tests. What’s useful about GrumPHP is that it has all these built-in rules, supports various tools and embeds itself into your workflow. Once installed, GrumPHP is executed on every commit, thanks to the power of Git commit hooks. This means that on every local commit, coding standards are checked and tests are executed. After that, it checks the commit message against some built-in rules (such as line length) and our issue-number check. If everything passes, the commit is done. If anything fails, the commit fails. Unless you deliberately skip these checks, this is an easy way to make sure certain things simply can’t be forgotten.
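To illustrate, a minimal grumphp.yml along these lines might look as follows. The exact task options and the issue pattern are assumptions that depend on your project and GrumPHP version (newer versions use grumphp: instead of parameters: as the root key):

# grumphp.yml — a minimal sketch; options depend on your GrumPHP version
parameters:
    tasks:
        # Check coding standards with PHP-CS-Fixer 2
        phpcsfixer2:
            rules: ['@PSR2']
        # Run the unit tests
        phpunit: ~
        # Validate the commit message; the issue pattern is illustrative
        git_commit_message:
            max_subject_width: 60
            matchers:
                - /[A-Z]+-\d+/

A minimal GrumPHP configuration sketch.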

In JavaScript we do something similar. We’ve installed Husky, which is comparable to GrumPHP in that it inserts itself into a commit hook and can run whatever you want it to run. The configuration is a bit simpler: inside our package.json we’ve defined a precommit task and one for commitmsg, and whatever we put in there will be executed by Husky. For coding standards, we make sure stylelint and ESLint are executed. For our React/JavaScript code, we’ve chosen to follow the style guide by Airbnb, so there’s some configuration present for that.

"scripts": {
...
"precommit": "yarn lint && yarn test --coverage",
"commitmsg": "node scripts/validateCommitMessage.js",
"lint": "stylelint 'src/**/*.css' ; eslint --ext=js --ext=jsx src",
...
}

Our pre-commit configuration in package.json.

{
  "parser": "babel-eslint",
  "extends": "airbnb",
  ...
}

Some of the configuration inside our .eslintrc.

The unit tests are written to be executed by Jest, and they run after the style checks. What’s useful about Jest is that we can configure a threshold for code coverage. This means that whenever the code coverage drops below the configured percentage, the check fails and we can’t commit. How to fix that? Go back and write a test (or several) for the code we just wrote! It’s easy to never have failing tests if you don’t write any, but this check catches even that.

"jest": {
"coverageThreshold": {
"global": {
"statements": 80,
"branches": 80,
"functions": 80,
"lines": 80
}
},
"collectCoverageFrom": [
"src/**/*.{js,jsx}"
]
},

Our Jest coverage configuration in package.json.

The commit message is checked by a tiny script of our own making. The issue number reference is the most important rule; the other rules that GrumPHP checks for us in PHP projects are just gravy. As there didn’t seem to be a useful commit message checker available for JavaScript, a small script of our own is what we went with.
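The script itself boils down to something like the following sketch. This is a hypothetical reconstruction for illustration, not our actual script; Husky 0.x passes the path to the commit message file in the GIT_PARAMS environment variable:

// scripts/validateCommitMessage.js — a hypothetical sketch, not our actual script
const fs = require('fs');

// Husky (0.x) exposes the path to the commit message file as GIT_PARAMS
const messageFile = process.env.GIT_PARAMS || process.argv[2];
const subject = fs.readFileSync(messageFile, 'utf8').split('\n')[0];

// Illustrative rule: the subject line must reference an issue, e.g. PROJ-123
if (!/[A-Z]+-\d+/.test(subject)) {
  console.error('Commit message must contain an issue number (e.g. PROJ-123).');
  process.exit(1);
}

A hypothetical commit message check.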

CI automation: fire and forget

When the commit is approved, we know that we’ve written code that does not break any tests, adheres to coding standards and has a (mostly) useful commit message. After a series of commits, it’s time to push the branch to the Git server and that’s where some more magic happens.

Our Git server is a GitLab installation. It has a great interface for code management, code review and lots of other interesting tools, but the most important ones for us are the Continuous Integration features. For any project in GitLab, it’s possible to define sets of tasks, called pipelines, that are executed on every push to a branch. We include Composer or Yarn builds, unit tests and coding standards checks in these tasks, all of which we already run on our local machines.

In addition to that, we add extra tasks, like a check for our Composer dependencies to see if any of them have security updates, and a task that runs our functional tests in a dedicated Docker image which contains a test runner and all its dependencies (in a previous post, you can read how we use Protractor, and in an upcoming post, we’ll tell you a bit about Cypress).
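As an illustration, a trimmed-down .gitlab-ci.yml for a JavaScript project could look something like this; the job names, the functional test image and the test script name are assumptions, not our actual configuration:

# .gitlab-ci.yml — a trimmed-down sketch, not our actual configuration
stages:
  - test

checks:
  stage: test
  image: node:8
  script:
    - yarn install
    - yarn lint
    - yarn test --coverage

functional_tests:
  stage: test
  # Hypothetical image containing the functional test runner and its dependencies
  image: registry.example.com/test-runner:latest
  script:
    - yarn install
    - yarn test:functional

A sketch of CI jobs in .gitlab-ci.yml.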

Putting these tasks as jobs in our CI setup is useful; they are often tasks you don’t want to include in your pre-commit hooks, because that would make development a lot slower (it’s not comfortable to have to wait several minutes for each commit to finish). But you still want to make sure that any breakage, or a security update, is noticed as soon as possible. Because we work with merge requests (which are the same as GitHub’s pull requests) and those pipelines run every time the code for a merge request is updated, it’s an automatic check that’s part of the code review process. GitLab won’t let us merge the branch in a merge request when the pipeline fails, so we have to fix any breakage before being able to move on.

Of course, this can also be bypassed, but we include these tasks to help ourselves, not to annoy fellow developers.

Lastly, after we’ve done all the testing and the checking and the making sure that what we have is nothing less than tried and tested, quality software (as far as you can check for ‘quality’ in automation, of course), we roll out our code. The master branch of every project is deployed to a test environment on every push, which means that once a merge request passes its checks and is merged, the test environment is updated. This is done by simply including a job in the GitLab CI config that runs the Ansible playbooks. We’ve configured the jobs that deploy to the test environment to run automatically on every master update, and have identical jobs for acceptance and production. These are, however, not executed automatically; we have to start them manually, and can only do so when we’ve created a new tag in Git. This couples Git tags (which are version numbers) to release moments, allowing us to keep a record of what went live at which moment.
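A sketch of what such deploy jobs can look like; the playbook and inventory names are illustrative, and a deploy stage is assumed to be declared in the pipeline:

# Deploy jobs in .gitlab-ci.yml — a sketch; playbook and inventory names are illustrative
deploy_test:
  stage: deploy
  script:
    - ansible-playbook -i inventories/test deploy.yml
  only:
    - master

deploy_production:
  stage: deploy
  script:
    - ansible-playbook -i inventories/production deploy.yml
  only:
    - tags
  # Production deploys are triggered by hand from the GitLab interface
  when: manual

A sketch of the deploy jobs.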

We say that SIM likes to make things simple. It’s right there in our tagline. But that doesn’t just apply to our products and services; we believe that when you take away the complexity of everything that surrounds the development process, it becomes easier to focus on the development itself. And that benefits the quality of what we put in front of our customers. So the rule is: everything that can be automated, should be automated.

It’s important to note that while we have all these automated checks in place, we actually haven’t invented any of this ourselves. We put it together based on existing conventions, tools and tips from helpful developers. We chose the checks we think are useful, and we’ll keep adding more as we find them.

Should you have any questions about how to set these things up for your own project, do let us know. We’ll gladly help. And of course, any useful additions to the above are always welcome!