Continuously delivering a multi-repository application for multi-architecture Kubernetes-clusters

Thomas Huffert
omi-uulm
Published Jul 6, 2021

In a recent TypeScript-based project, we ran into a challenge. We had two different repositories: an application and the underlying framework it depends on. The application needed to be retested every time changes to the framework were applied, to ensure the integrity of the update.

For local development, a structure like this is not a problem at all. In the case of TypeScript, npm for example allows linking the two repositories by first running npm link in the framework repository, after which it can be incorporated as a local dependency in the application via npm link framework. This works perfectly fine if testing the repositories locally is sufficient. Other languages or runtime environments allow for similar solutions.
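With hypothetical directory names, the two linking steps could look like this:

```shell
# In the framework repository: register the package globally
cd framework
npm link

# In the application repository: link it as a local dependency
cd ../application
npm link framework
```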

However, in our case, it was not quite as easy. Our application is situated in the domain of the Internet of Things (IoT), targeting multiple different architectures and utilizing device- and platform-specific communication technologies. It therefore could not be fully tested locally, but needed to be deployed on a testbed — for us, in the form of a local Kubernetes cluster — to ensure that everything works correctly. Consequently, the application had to be containerized and made available first.

In the following article, we describe how we built an applicable continuous delivery solution via GitLab CI pipelines, utilizing intermediate images in an Open Container Initiative (OCI) container registry.

The starting point

First, let's describe the starting point: the basic pipeline we used for a single-repository project. We use a self-hosted GitLab instance to host our repositories.

The goal was to build multi-architecture images that work on Linux-based amd64, arm64 and arm/v7 systems. This is the basic gitlab-ci.yml we used for a multi-architecture TypeScript project:
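The original file is embedded as a gist; as a rough sketch, its stage layout could look like this:

```yaml
# Sketch: the four stages of the basic pipeline
stages:
  - npm
  - eslint
  - test
  - containerize
```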

npm

In the first stage, the npm dependencies are resolved and the package is built. In our case, we utilize pnpm for the dependencies, an alternative, faster package manager for Node.js. It gives us benefits down the line, which we utilize in the containerize step.

Both dependencies and build-output are stored as artifacts for usage in later stages.
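A possible shape of this job; the image tag, script names and output paths are assumptions, not the original file:

```yaml
npm:
  stage: npm
  image: node:16-alpine        # assumption: any recent Node.js image works
  script:
    - npm install -g pnpm
    - pnpm install
    - pnpm run build           # assumption: "build" runs the TypeScript compiler
  artifacts:
    paths:
      - node_modules/
      - dist/                  # assumption: compiler output directory
```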

eslint

In the second stage, the source code is analyzed using ESLint to check for problems in the code, for example syntax errors.

We also use the prettier-plugin for ESLint to ensure a consistent code style and format.
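Such a job could be sketched like this, assuming a "lint" script that invokes ESLint with the prettier plugin:

```yaml
eslint:
  stage: eslint
  image: node:16-alpine        # assumption: same Node.js image as the npm stage
  script:
    - pnpm run lint            # assumption: "lint" runs eslint
```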

test

In the third stage, we execute tests using Jest to check for easy-to-catch errors in the code.
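A minimal sketch of this job, assuming a "test" script that invokes Jest:

```yaml
test:
  stage: test
  image: node:16-alpine        # assumption: same Node.js image as the npm stage
  script:
    - pnpm run test            # assumption: "test" runs jest
```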

containerize

In the last stage, it gets interesting. We containerize our application using the jess/img image, a Dockerfile-based container build tool built on BuildKit. The produced images are uploaded to a local container registry.
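A sketch of such a job, using GitLab's predefined registry variables; the platform list matches the targets above, while the image name is a placeholder:

```yaml
containerize:
  stage: containerize
  image: r.j3ss.co/img         # the jess/img image
  script:
    - img login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - img build --platform linux/amd64,linux/arm64,linux/arm/v7 -t "$CI_REGISTRY_IMAGE/app:latest" .
    - img push "$CI_REGISTRY_IMAGE/app:latest"
```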

Within jess/img, the image for every architecture is built separately. Consequently, many partly redundant operations are performed during the build process. To shorten build times, we therefore made several optimizations to our base Dockerfile:

Mainly, we did three things:

  • Use the smallest applicable base-image (for us, alpine worked)
  • Use compiled TypeScript code from build artifacts
  • Use pnpm for installation of dependencies instead of npm
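Put together, the optimized Dockerfile might look roughly like this; the base image tag and paths are assumptions:

```dockerfile
# Sketch of the optimized Dockerfile; tag and paths are assumptions
FROM node:16-alpine

WORKDIR /app

# Reuse the lockfile and the already compiled TypeScript output
# from the pipeline artifacts instead of compiling inside the image
COPY package.json pnpm-lock.yaml ./
COPY dist/ ./dist/

# Install production dependencies with pnpm instead of npm
RUN npm install -g pnpm && pnpm install --prod

CMD ["node", "dist/index.js"]
```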

All in all, these optimizations drove the build time down from around 30 minutes to below 5. This saved us some headaches, but cost us a few coffee breaks; priorities have to be set somewhere.

The realization

Now, let’s start incorporating our scenario. We have two different packages, one being the application, one being the framework. Both are managed in different repositories; however, the former depends on the latter. This means that build artifacts need to be shared between different pipelines, since every repository has its own build pipeline.

In GitLab, there are multiple ways to do this. One would be to use a GitLab Package Registry to upload the compiled package to a local registry, another to retrieve artifacts of another pipeline via the CI_JOB_TOKEN. These solutions, however, are dependent on GitLab (and the latter only works with GitLab Premium). We instead opted to use our self-hosted OCI container registry, inspired by Docker multi-stage builds.

Schematic structure of project pipelines

In the framework pipeline, we regularly create an image based on alpine — base — as we would if we only had a single-repository project. However, we create an additional image, base-build, which is intended as the build image for later steps. Both of those images contain the build artifacts — aka the compiled framework — for later usage in the application pipeline.

In the application pipeline, we use the aforementioned base-build image to produce our build artifacts. Subsequently, the image base is used as the base image to produce the image app. The app image can then be deployed on the Kubernetes cluster.

Framework-pipeline

We only have to slightly adjust our basic build pipeline to create the framework pipeline:

framework gitlab-ci.yml
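The file is largely identical to the basic pipeline; a sketch of the adjusted containerize job, with image names and Dockerfile names as assumptions:

```yaml
containerize:
  stage: containerize
  image: r.j3ss.co/img
  script:
    - img login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # regular base image, built from the regular Dockerfile
    - img build --platform linux/amd64,linux/arm64,linux/arm/v7 -t "$CI_REGISTRY_IMAGE/base:latest" -f Dockerfile .
    - img push "$CI_REGISTRY_IMAGE/base:latest"
    # additional build image, built from the BuildDockerfile
    - img build --platform linux/amd64,linux/arm64,linux/arm/v7 -t "$CI_REGISTRY_IMAGE/base-build:latest" -f BuildDockerfile .
    - img push "$CI_REGISTRY_IMAGE/base-build:latest"
```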

Most notably, we create an additional image — base-build — with a slightly different Dockerfile:

framework BuildDockerfile

It instead uses a buster-based base image, which includes the OS dependencies necessary for compiling TypeScript projects. Additionally, dependencies are not installed, since that is done as part of the application pipeline — this makes the image smaller.
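A sketch of such a BuildDockerfile; the base image tag and paths are assumptions:

```dockerfile
# Sketch of the BuildDockerfile; tag and paths are assumptions
FROM node:16-buster

WORKDIR /framework

# Ship the lockfile and the compiled framework from the pipeline
# artifacts; dependencies are deliberately not installed here
COPY package.json pnpm-lock.yaml ./
COPY dist/ ./dist/

# No ENTRYPOINT/CMD: this image only serves as a build image
```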

The regular Dockerfile looks almost the same:

framework Dockerfile

We just do not install the production dependencies, again to make the image smaller.

Both Dockerfiles do not include an entrypoint, since they are not meant to be deployed on their own.
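For illustration, the regular framework Dockerfile might be sketched as follows; tag and paths are assumptions:

```dockerfile
# Sketch of the regular framework Dockerfile; tag and paths are assumptions
FROM node:16-alpine

WORKDIR /framework

# Only the compiled framework is shipped; production dependencies
# are installed later, in the application image
COPY package.json pnpm-lock.yaml ./
COPY dist/ ./dist/

# No ENTRYPOINT/CMD: this image only serves as a base image
```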

Application-pipeline

We modify the basic pipeline to incorporate the images created earlier.

The application pipeline looks like this:

application gitlab-ci.yml

We use the base-build image to create our build-artifacts. The included framework package is linked in the npm stage.
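The npm stage of the application pipeline could be sketched like this; the registry path and the /framework location inside the image are assumptions:

```yaml
npm:
  stage: npm
  # assumption: registry path of the framework's build image
  image: registry.example.com/framework/base-build:latest
  script:
    # link the framework shipped inside the build image
    - pnpm link /framework
    - pnpm install
    - pnpm run build
  artifacts:
    paths:
      - node_modules/
      - dist/
```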

Similarly, we modify our basic Dockerfile to use the base image and link the framework as well:
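A sketch of this application Dockerfile; the registry path and file paths are assumptions:

```dockerfile
# Sketch of the application Dockerfile; registry path and paths are assumptions
FROM registry.example.com/framework/base:latest

WORKDIR /app

COPY package.json pnpm-lock.yaml ./
COPY dist/ ./dist/

# Link the framework contained in the base image, then install
# the remaining production dependencies
RUN npm install -g pnpm \
 && pnpm link /framework \
 && pnpm install --prod

CMD ["node", "dist/index.js"]
```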

With this, we’re done.

Final thoughts

With this, we created pipelines, spanning across multiple repositories, that manage inter-pipeline build artifacts using an OCI container registry and create multi-architecture images for usage in a Kubernetes cluster. Although it is focused on Node.js-based TypeScript projects, this method could be used with other languages or runtime environments as well, since it basically just uses the intermediate images as containers to share the project artifacts. Also, since no GitLab-exclusive features were used, it would work with other hosting services like GitHub as well (albeit with conversion of the gitlab-ci.yml files).

One thing that can still be improved upon is the handling of job triggers. Currently, a job for the application pipeline needs to be triggered manually after the framework’s pipeline is done. This could be done automatically with GitLab pipeline triggers.
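Such an automatic trigger could be sketched with GitLab's trigger keyword as a final job in the framework pipeline; the project path and the extra trigger stage are placeholders:

```yaml
trigger-application:
  stage: trigger                       # assumption: added as a final stage
  trigger:
    project: our-group/application     # placeholder: path of the application project
    branch: main
```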
