CI tools do not continuously integrate and what to do about it

Markus Hanslik
Apr 29 · 4 min read

Imagine this: It’s the year 2019. We have self-driving cars (or at least great marketing). Coding is becoming a regular part of curriculums around the globe. Even business managers know that agile is king. You can spin up your start-up in the cloud with nothing but code; and writing good code has become easier than ever, thanks to great frameworks, communities like Stack Overflow, thousands of books, and an ever-growing number of open-source libraries doing the grunt work for us.

But still, we are all human, which is to say: sometimes sloppy, unfocused, inexperienced, or unaware, and at times we simply do not care enough about our work.

Continuous Integration tools are not test-centric

Luckily, there is Continuous Integration, definitely one of the most helpful tools in a developer’s (and operations manager’s!) toolkit. Finally, you can just commit your change and wait for a computer to tell you whether you did a good or not-so-good job.

Or can you? It’s the year 2019, yet all of the modern, pipeline/file-based CI tools do little more than run your scripts. (And since not versioning your pipeline in your repository should not be a thing anymore, we won’t talk about the other tools.)

Their UIs are basically just lists of console output and nothing else. Did you spend time configuring JUnit output for jest, or for any of the xUnit tools in any other language? Did you use proper scenario-based language in your tests, possibly BDD, and maybe even mark your tests with tags? Well, good luck seeing any of that in their UIs. (Spoiler: you can’t.)

GitLab CI, Bitbucket Pipelines, CircleCI, Semaphore, AWS CodePipeline, … if you are lucky, they will show you the number of failed tests in a build. Most of them will not show you any statistics, let alone statistics that might actually be useful, like who keeps committing stuff that breaks the build (red, not green) or which code changes tend to break similar unit tests (do I hear somebody saying “machine learning start-up idea”?).

And not just that: even to see the number of failed tests, you need to jump through hoops with all of these services, which gives you the feeling they were not developed for actually doing CI in the first place.

Try setting up the export of JUnit test results from within a Docker container; most services have restrictions like not being able to mount volumes, or not being able to copy files out of the container, and having to place things into a specific folder or configuration… If a CI vendor then does not even document how to get the unit test results from Docker into their UI, there should be a law immediately banning “docker-first” and “CI” from their marketing.
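To be fair, some of the pipeline tools do at least accept a JUnit-style XML file as a job artifact. Here is a minimal sketch of how that can look in GitLab CI, assuming jest together with the separately installed jest-junit reporter; other services have their own, differently named equivalents:

    # .gitlab-ci.yml (sketch): assumes jest-junit is installed as a dev dependency
    unit-tests:
      image: node:10              # the job itself runs inside a Docker image
      script:
        - npm ci
        # write a JUnit-style XML report in addition to the normal console output
        - npx jest --ci --reporters=default --reporters=jest-junit
      artifacts:
        when: always              # keep the report even (especially) when tests fail
        reports:
          junit: junit.xml        # jest-junit writes junit.xml by default

This only covers the simple case where the tests run directly in the job’s image; if they run inside a container started by the job, you still have to copy the XML file out yourself before the job ends.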

We should define, here and now: you are only doing CI if you can immediately see which tests fail and which fail often, see your linter’s output so you can also identify technical debt, and have a bird’s-eye view of all of that so that you can actually learn from past mistakes.

Continuous Integration tools are not integrating

Unfortunately, it’s not just that they are not helpful in finding issues in a new commit; they are also not particularly helpful in, well, continuously integrating.

The smaller the services get, the more important it becomes to test the integration between them and to make sure that both the services and their contracts work properly.

Here, it’s even worse: the pipeline-based CI tools are still in their very early stages and do not offer much help with testing the integration of multiple services, with changes that span services, or with re-building and re-testing a service when one of its dependencies has changed, and so on.

How to mitigate those issues?

Luckily, though, there are some ways to get more out of your CI even now:

  1. To have your CI actually do integration in a very simple way, consider using a mono-repository for your platform’s services. Especially if you are migrating from a monolith (or one big service) to smaller services, doing this in a single (aka mono-) repository means the CI will actually run all tests on changes across the affected services, with the added bonus of more insightful pull requests. If the tests then get too slow, it is easier to configure the CI script to run only parts of the tests (e.g. based on the folder names in a git diff, as sketched below) than to try to check out many repositories. This helps even if you just have one backend and one frontend service.
  2. As long as there is no help from the CI tools themselves, make sure your tslint, jest, phpunit, etc. outputs are configured as browsable artifacts (e.g. via GitLab Pages), and make sure your CI fails if your tests fail (see the second sketch below).
  3. Add services like greenkeeper.io (or Renovate, etc.) to your pipeline, so that dependency upgrades are properly checked in a separate branch and can easily be tested and merged into the master branch without mixing multiple changes at once.
  4. Set up a scheduled pipeline (available in Bitbucket, GitLab, etc.) that runs your dependency security and license checks (e.g. for npm) at least on your master branch; and ideally make those checks part of each feature branch’s run too (see the last sketch below).
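For point 1, most pipeline-based tools at least let you restrict jobs to path changes, which is enough for a simple mono-repository setup. A minimal sketch in GitLab CI syntax, assuming a hypothetical layout with backend/ and frontend/ folders, each holding an npm-based service:

    # Sketch: run a service's tests only when files in its folder changed
    backend-tests:
      script:
        - cd backend && npm ci && npm test
      only:
        changes:
          - backend/**/*

    frontend-tests:
      script:
        - cd frontend && npm ci && npm test
      only:
        changes:
          - frontend/**/*

Other services offer similar path filters, and if yours does not, a plain git diff --name-only at the start of the script can achieve the same effect.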
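For point 2, the following sketch (again GitLab CI syntax; tools and folder names are just examples) fails the pipeline on test or lint errors and keeps the HTML reports around as browsable artifacts, with an optional job that publishes the latest master report to GitLab Pages:

    # Sketch: fail the build on test/lint errors, keep the HTML reports browsable
    test-and-lint:
      script:
        - npm ci
        - npx jest --coverage        # a non-zero exit code here fails the pipeline
        - npx tslint --project .
      artifacts:
        when: always
        paths:
          - coverage/                # jest's HTML coverage report, browsable per job

    # Optional: publish the latest report from master as a GitLab Pages site
    pages:
      stage: deploy
      script:
        - mv coverage public         # Pages serves whatever ends up in public/
      artifacts:
        paths:
          - public
      only:
        - master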
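And for point 4, a scheduled pipeline is usually just a regular pipeline with a trigger attached to it, so the checks themselves can live in the same file; npm audit and the license-checker package are only one possible combination here:

    # Sketch: dependency security and license checks, run from a pipeline schedule
    dependency-audit:
      script:
        - npm ci
        - npm audit --audit-level=high    # fail on high/critical advisories
        - npx license-checker --summary   # print a summary of all dependency licenses
      only:
        - schedules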
