Trying new tools for building and automating the deployment in Kubernetes

kvaps
Jan 15 · 17 min read

Hi!
Recently, many cool automation tools have been released, both for building Docker images and for deploying to Kubernetes. So I decided to play with GitLab a little, study its capabilities and, of course, configure a pipeline.

The source of inspiration for this work was the site kubernetes.io, which is automatically generated from source code.
For each new pull request, a bot automatically generates a preview version with your changes and provides a link for review.

I tried to build a similar process from scratch, built entirely on GitLab CI and the free tools that I usually use to deploy applications to Kubernetes. Today I will finally tell you more about them.

The article covers the following tools: Hugo, qbec, kaniko, git-crypt and GitLab CI with its dynamic environments feature.



1. Getting started with Hugo

As an example project, we will try to create a site for publishing documentation, built on Hugo. Hugo is a static site generator.

For those who are not familiar with static generators, a few words about them. Unlike regular site engines, with a database and some PHP that generate pages on the fly at the user's request, static generators work a little differently.
They let you take the source text, say, a set of files in Markdown markup plus theme templates, and compile them into a completely finished site.

That is, at the output you get a directory structure and a set of generated HTML files that can simply be uploaded to any cheap hosting to get a working site.

Hugo can be installed locally, so let's try it out.

Initialize the new site:

hugo new site docs.example.org

And also git-repository:

cd docs.example.org
git init

Right now our site is empty, and if we want anything to appear on it, the first thing we need is to connect a theme. A theme is just a set of templates and preset rules used to generate our site.

We will use Learn theme, which, in my opinion, is the best suited for a site with documentation.

Note that we don't need to store the theme files in our repository; instead, we can simply connect it using a git submodule:

git submodule add https://github.com/matcornic/hugo-theme-learn themes/learn

This way our repository will contain only files directly related to our project, and nothing else. The connected theme remains just a link to a specific repository and commit hash, so it can always be pulled from the original source without fear of incompatible changes.

Edit the config config.toml:
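The original config was embedded as an external snippet; here is a minimal sketch of what it could look like. The baseURL and title are assumptions based on the rest of the article.

```toml
# Minimal Hugo config sketch; baseURL and title are illustrative
baseURL = "http://docs.example.org/"
languageCode = "en-us"
title = "My docs site"
theme = "learn"
```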

Now we can run hugo server, go to http://localhost:1313/ and check our newly created site. All changes made in the directory automatically update the page in the browser, which is very convenient!

Let’s try to create a title page content/_index.md:

# My docs site

## Welcome to the docs!

You will be very smart :-)
Screenshot of created page

To generate the site, just run:

hugo

The contents of the public/ directory is your site.

By the way, let’s add it to .gitignore:

echo /public > .gitignore

Do not forget to commit our changes:

git add .
git commit -m "New site created"

2. Dockerfile preparation

It is time to determine the structure of our repository. Usually I use something like:

.
├── deploy
│   ├── app1
│   └── app2
└── dockerfiles
    ├── image1
    └── image2

  • dockerfiles/ — contains directories with Dockerfiles and everything needed to build our docker images.
  • deploy/ — contains directories for deploying our applications to Kubernetes.

Thus, we will create our first Dockerfile at the path dockerfiles/website/Dockerfile:
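The original Dockerfile was embedded as an external snippet; below is a hedged sketch of a two-stage build matching the description that follows. The base image, package names and darkhttpd options are assumptions.

```dockerfile
# Build stage: compile the static site with Hugo
FROM alpine:3.11 AS builder
RUN apk add --no-cache hugo
WORKDIR /src
COPY . .
RUN hugo

# Final stage: only the generated site and a lightweight HTTP server
FROM alpine:3.11
RUN apk add --no-cache darkhttpd
COPY --from=builder /src/public /var/www
EXPOSE 80
ENTRYPOINT ["darkhttpd", "/var/www", "--port", "80"]
```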

As you can see, the Dockerfile contains two FROM instructions. This feature is called multi-stage build and allows you to exclude everything unnecessary from the final docker image. Thus, the final image will contain only the content of our statically generated site and darkhttpd (a lightweight HTTP server).

Do not forget to commit our changes:

git add dockerfiles/website
git commit -m "Add Dockerfile for website"

3. Getting started with kaniko

I decided to use kaniko to build docker images, since it does not require a running docker daemon. The build can be done on any host, and the layers can be cached directly in the docker registry, eliminating the need for full persistent storage.

To build the image, just start a container with the kaniko executor and pass it the current build context; this can also be done locally, using docker:

docker run -ti --rm \
-v $PWD:/workspace \
-v ~/.docker/config.json:/kaniko/.docker/config.json:ro \
gcr.io/kaniko-project/executor:v0.15.0 \
--cache \
--dockerfile=dockerfiles/website/Dockerfile \
--destination=registry.gitlab.com/kvaps/docs.example.org/website:v0.0.1

Where registry.gitlab.com/kvaps/docs.example.org/website is the name of your docker image; after the build, it will automatically be pushed to the docker registry.

The --cache option allows caching the layers in the docker registry; for this example they will be saved in registry.gitlab.com/kvaps/docs.example.org/website/cache, but you can specify another location using the --cache-repo option.

Screenshot of docker-registry

4. Getting started with qbec

Qbec is a deployment tool that allows you to declaratively describe your application manifests and deploy them to Kubernetes. Using Jsonnet as the main syntax makes it easy to describe the differences between several environments, and also almost completely eliminates code repetition.

This can be really useful in cases where you need to deploy an application into several clusters with different parameters and you want to declaratively describe them in Git.

Qbec can also render Helm charts, passing the necessary parameters to them, and then operate on them the same way as regular manifests. It also lets you apply mutations to them, which eliminates the need to use ChartMuseum. That is, you can store and render charts directly from git, where they belong.

As I said before, we will store all deployments in a directory deploy/:

mkdir deploy
cd deploy

Let’s initialize our first application:

qbec init website
cd website

Now the structure of our application looks like this:

.
├── components
├── environments
│   ├── base.libsonnet
│   └── default.libsonnet
├── params.libsonnet
└── qbec.yaml

Let's look at the file qbec.yaml:
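The generated file was shown as an external snippet; here is a hedged sketch of what qbec init typically produces. The server address and namespace are placeholders taken from a local kubeconfig.

```yaml
# Sketch of a generated qbec.yaml; server and namespace are examples
apiVersion: qbec.io/v1alpha1
kind: App
metadata:
  name: website
spec:
  environments:
    default:
      defaultNamespace: docs
      server: https://kubernetes.example.org:6443
  vars: {}
```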

Here we are primarily interested in spec.environments: qbec has already created a default environment, taking the namespace and the server address from our current kubeconfig.
Now, when deploying to the default environment, qbec will always deploy to that particular Kubernetes cluster and namespace, so you no longer need to switch namespace and context before applying the configuration.
You can always update the settings in this file if necessary.

All your environments should be described in qbec.yaml, while the params.libsonnet file describes where to take the parameters for them.

Next we see two directories:

  • components/ — all the manifests for our application will be stored here; we can describe them using both jsonnet and ordinary yaml files
  • environments/ — here we will describe all the variables (parameters) for our environments.

By default, we have two files:

  • environments/base.libsonnet — contains general parameters for all environments
  • environments/default.libsonnet — contains parameter overrides for default environment

Let’s open environments/base.libsonnet and add the parameters for our first component there:
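The original parameters were embedded as an external snippet; a hedged sketch follows. The parameter names (name, image, replicas, domain) are assumptions consistent with the deployment described later.

```jsonnet
// Sketch of environments/base.libsonnet; field names are illustrative
{
  components: {
    website: {
      name: 'website',
      image: 'registry.gitlab.com/kvaps/docs.example.org/website:v0.0.1',
      replicas: 1,
      domain: 'docs.example.org',
    },
  },
}
```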

Create also our first component components/website.jsonnet:
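The component itself was embedded as an external snippet; here is a hedged sketch of a component producing the three entities described below. The parameter names match the base.libsonnet assumptions, and the import path relies on qbec adding the app root to the jsonnet library path.

```jsonnet
// Sketch of components/website.jsonnet; parameter names are assumptions
local p = import 'params.libsonnet';
local params = p.components.website;

[
  {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: { name: params.name },
    spec: {
      replicas: params.replicas,
      selector: { matchLabels: { app: params.name } },
      template: {
        metadata: { labels: { app: params.name } },
        spec: {
          containers: [
            { name: 'darkhttpd', image: params.image, ports: [{ containerPort: 80 }] },
          ],
        },
      },
    },
  },
  {
    apiVersion: 'v1',
    kind: 'Service',
    metadata: { name: params.name },
    spec: { selector: { app: params.name }, ports: [{ port: 80 }] },
  },
  {
    apiVersion: 'networking.k8s.io/v1beta1',
    kind: 'Ingress',
    metadata: { name: params.name },
    spec: {
      rules: [{
        host: params.domain,
        http: { paths: [{ backend: { serviceName: params.name, servicePort: 80 } }] },
      }],
    },
  },
]
```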

In this file we describe three Kubernetes entities at once: a Deployment, a Service and an Ingress. We could separate them into different components, but at this stage one is enough for us.

The jsonnet syntax is very similar to regular JSON; in fact, regular JSON is already valid jsonnet, so at first it might be easier for you to use online services like yaml2json to convert your usual yaml manifests to JSON format. Alternatively, if your components do not contain any variables, they can simply be placed as ordinary yaml files.

I highly recommend you to install a plugin for your editor for working with jsonnet.

For example, there is a nice vim-jsonnet plugin for vim that turns on syntax highlighting and automatically runs jsonnet fmt on each save (it requires the jsonnet binary to be installed).

Everything is ready to start the deployment:

To see what exactly will be applied, just run:

qbec show default

In the output, you will see rendered yaml-manifests that will be applied to the default cluster.

Ok, now apply:

qbec apply default

In the output you will see what changes will be made in your cluster, and qbec will ask you to confirm them. Type y to accept.

Done, now our application is deployed!

After any change in description, you can always do:

qbec diff default

to see how these changes will affect the current deployment.

Do not forget to commit our changes:

cd ../..
git add deploy/website
git commit -m "Add deploy for website"

5. Trying Gitlab-runner with Kubernetes-executor

Until recently, I used only the ordinary gitlab-runner on a prepared machine (an LXC container) with the shell or docker executor.
In the beginning, we had several of these runners defined globally in our GitLab; they were building docker images for all our projects.

But as practice has shown, this setup is not ideal, in terms of both practicality and security. It is much better, and ideologically more correct, to have separate runners deployed per project, or even per environment.

Fortunately, this is not a problem at all, since now we will deploy gitlab-runner directly to Kubernetes as part of our application.

Gitlab provides a helm-chart ready for deploying gitlab-runner in Kubernetes. Thus, all you need to do is find out the registration token for our project in Settings → CI / CD → Runners and pass it to Helm:

helm repo add gitlab https://charts.gitlab.io
helm install gitlab-runner \
--set gitlabUrl=https://gitlab.com \
--set runnerRegistrationToken=yga8y-jdCusVDn_t4Wxc \
--set rbac.create=true \
gitlab/gitlab-runner

Where:

  • https://gitlab.com — the address of your GitLab server.
  • yga8y-jdCusVDn_t4Wxc — the registration token for your project.
  • rbac.create=true — grants the runner all the privileges it needs to create new pods and perform our jobs using the Kubernetes executor.

If everything is done correctly, you should see the registered runner in the Runners section in the settings page of your project.

Screenshot of added runner

Is it that simple? Yes, that simple! No more hassle with manual runner registration; from now on, runners will be created and destroyed automatically.


6. Deploying Helm-charts with qbec

Since we decided to consider gitlab-runner as part of our project, it is time to describe it in our Git-repository.

We could describe it as another component of the website application, but in the future we plan to deploy different copies of the website very often, unlike gitlab-runner, which will be deployed only once per Kubernetes cluster. So let's initialize a separate application for it:

cd deploy
qbec init gitlab-runner
cd gitlab-runner

This time we will not describe the Kubernetes entities manually, but use a ready-made Helm chart. One of qbec's advantages is the ability to render Helm charts directly from a Git repository.

Let’s connect it using git submodule:

git submodule add https://gitlab.com/gitlab-org/charts/gitlab-runner vendor/gitlab-runner

Now the vendor/gitlab-runner directory contains a link to the repository with the chart for gitlab-runner.

In a similar way you can connect other repositories as well, for example, the whole repository with official charts: https://github.com/helm/charts

Let’s describe component components/gitlab-runner.jsonnet:
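The component was embedded as an external snippet; a hedged sketch using qbec's expandHelmTemplate native function follows. The parameter names under params are assumptions; the options object matches the arguments described below.

```jsonnet
// Sketch of components/gitlab-runner.jsonnet; params field names are assumptions
local p = import 'params.libsonnet';
local params = p.components.gitlab_runner;

std.native('expandHelmTemplate')(
  '../vendor/gitlab-runner',  // path to the chart submodule
  params.values,              // values taken from the environment parameters
  {
    nameTemplate: params.name,      // release name
    namespace: params.namespace,    // namespace passed to Helm
    thisFile: std.thisFile,         // required: path to the current file
    verbose: true,                  // print the helm template command
  }
)
```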

The first argument to expandHelmTemplate is the path to the chart, then params.values, which we take from the environment parameters, and then an object with:

  • nameTemplate — the release name
  • namespace — the namespace passed to Helm
  • thisFile — a required parameter passing the path to the current file
  • verbose — shows the helm template command with all its arguments when rendering the chart

Now let’s describe the parameters for our component in environments/base.libsonnet:

Note that we take runnerRegistrationToken from the external file secrets/base.libsonnet; let's create it:
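The secrets file was embedded as an external snippet; a minimal hedged sketch, reusing the registration token from the helm example above. The exact structure of the file is an assumption.

```jsonnet
// Sketch of secrets/base.libsonnet; the structure is illustrative
{
  runnerRegistrationToken: 'yga8y-jdCusVDn_t4Wxc',
}
```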

Check if everything works:

qbec show default

If everything is fine, we can remove the Helm release we deployed earlier:

helm uninstall gitlab-runner

and deploy it again, but using qbec:

qbec apply default

7. Getting started with git-crypt

Git-crypt is a tool that allows you to configure transparent encryption for your repository.

At the moment, the structure of our directory for gitlab-runner looks like this:

.
├── components
│   └── gitlab-runner.jsonnet
├── environments
│   ├── base.libsonnet
│   └── default.libsonnet
├── params.libsonnet
├── qbec.yaml
├── secrets
│   └── base.libsonnet
└── vendor
    └── gitlab-runner (submodule)

But saving secrets in Git is not safe, is it? So we need to encrypt them properly.

Usually that does not make much sense for a single variable, since you can pass secrets to qbec using the environment variables of your CI system.

But note that there can be more complex projects containing many more secrets; passing all of them through environment variables would be extremely difficult.

Besides, in that case I would not be able to tell you about such a wonderful tool as git-crypt.

Git-crypt is also quite convenient because it lets you keep the whole history of your secrets, as well as compare, merge and resolve conflicts the same way as with standard Git.

The first step after installing git-crypt is to generate the keys for our repository:

git crypt init

If you have a PGP-key, then you can immediately add yourself as a collaborator for this project:

git-crypt add-gpg-user kvapss@gmail.com

Thus, you can always decrypt this repository using your private key.

If you don’t have a PGP-key and do not plan to have it, then you can go the other way and export the project key:

git crypt export-key /path/to/keyfile

That way, anyone who has the exported keyfile can decrypt your repository.

It is time to configure our first secret.
Remember that we are still in the deploy/gitlab-runner/ directory, where we have the secrets/ directory; let's encrypt all the files inside it. To achieve this, we create the file secrets/.gitattributes with the following content:
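The file content was embedded as an external snippet; this is the standard git-crypt pattern matching the description that follows:

```
* filter=git-crypt diff=git-crypt
.gitattributes !filter !diff
```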

As can be seen from the content, all files matching the mask * will be run through git-crypt, with the exception of .gitattributes itself.

We can verify this by executing:

git crypt status -e

In the output, we see a list of all files in the repository for which encryption is enabled

That's all; now we can safely commit our changes:

cd ../..
git add .
git commit -m "Add deploy for gitlab-runner"

To lock the repository, just do:

git crypt lock

and all encrypted files will turn into binary objects that are impossible to read. To decrypt the repository, run:

git crypt unlock

8. Preparing toolbox image

A toolbox image is an image with all the tools needed to perform deploy operations in our project. It will be used by gitlab-runner to perform typical deployment tasks.

Everything is simple here, create a new dockerfiles/toolbox/Dockerfile with the following content:
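The original Dockerfile was embedded as an external snippet; here is a hedged sketch. The base image, tool versions and download URLs are assumptions; only git-crypt, qbec and kubectl are implied by the surrounding text.

```dockerfile
# Sketch of dockerfiles/toolbox/Dockerfile; versions and URLs are illustrative
FROM alpine:3.11
RUN apk add --no-cache git git-crypt bash curl

# kubectl: not strictly required for the deploy, but handy for debugging
RUN curl -L https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl \
      -o /usr/local/bin/kubectl \
 && chmod +x /usr/local/bin/kubectl

# qbec
RUN curl -L https://github.com/splunk/qbec/releases/download/v0.10.5/qbec-linux-amd64.tar.gz \
      | tar -C /usr/local/bin -xz
```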

As you can see, this image contains all the tools we have used to deploy our application. Strictly speaking, only kubectl is not needed here, but you might want to play with it during the pipeline setup stage.

Also, to be able to communicate with Kubernetes and perform deployment operations, we need to configure a role for the pods created by gitlab-runner.

To do this, go to the directory with gitlab-runner:

cd deploy/gitlab-runner

and add new component components/rbac.jsonnet:
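The component was embedded as an external snippet; below is a hedged sketch of a ServiceAccount plus a binding, which is one common way to grant the runner's pods deploy rights. The exact role scope and namespace in the original may well differ.

```jsonnet
// Sketch of components/rbac.jsonnet; the role scope is an assumption
local p = import 'params.libsonnet';
local params = p.components.rbac;

[
  {
    apiVersion: 'v1',
    kind: 'ServiceAccount',
    metadata: { name: params.name },
  },
  {
    apiVersion: 'rbac.authorization.k8s.io/v1',
    kind: 'ClusterRoleBinding',
    metadata: { name: params.name },
    roleRef: {
      apiGroup: 'rbac.authorization.k8s.io',
      kind: 'ClusterRole',
      name: 'cluster-admin',
    },
    subjects: [
      { kind: 'ServiceAccount', name: params.name, namespace: 'default' },
    ],
  },
]
```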

We will also describe the new parameters in environments/base.libsonnet, which will now look like this:

Note that $.components.rbac.name refers to the name of the rbac component.

Let’s check what has changed:

qbec diff default

and apply our changes to Kubernetes:

qbec apply default

Also, don’t forget to commit our changes to Git:

cd ../..
git add dockerfiles/toolbox
git commit -m "Add Dockerfile for toolbox"
git add deploy/gitlab-runner
git commit -m "Configure gitlab-runner to use toolbox"

9. Our first pipeline and building images using tags

In the project’s root we will create .gitlab-ci.yml with the following content:
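The original pipeline was embedded as an external snippet; here is a hedged sketch of a tag-driven kaniko build job. The job name, stage layout and registry auth snippet are assumptions based on common GitLab CI practice; the kaniko flags mirror the local example from section 3.

```yaml
# Sketch of .gitlab-ci.yml; job name and auth snippet are illustrative
build_website:
  stage: build
  image:
    # kaniko's debug image ships a shell, which GitLab CI requires
    name: gcr.io/kaniko-project/executor:v0.15.0-debug
    entrypoint: [""]
  variables:
    GIT_SUBMODULE_STRATEGY: normal
  before_script:
    # authenticate kaniko against the GitLab registry
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
  script:
    - /kaniko/executor
      --cache
      --context $CI_PROJECT_DIR
      --dockerfile $CI_PROJECT_DIR/dockerfiles/website/Dockerfile
      --destination $CI_REGISTRY_IMAGE/website:$CI_COMMIT_TAG
  only:
    refs:
      - tags
```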

Note that we use GIT_SUBMODULE_STRATEGY: normal for those jobs where you need to explicitly initialize the submodules before execution.

Do not forget to commit our changes:

git add .gitlab-ci.yml
git commit -m "Automate docker build"

I think we are brave enough to call it version v0.0.1 and add a tag:

git tag v0.0.1

We will use tags whenever we need to release a new version. Tags of Docker images will be tied to Git tags: each push of a new tag triggers an image build with that tag.

Run git push --tags, and take a look at our first pipeline:

Note that using tags works well for building docker images, but not for deploying the application to Kubernetes: since new tags can be added to old commits, the pipeline for them would initiate a deployment of the old version.

To solve this problem, docker image building is usually bound to tags, while application deployment is bound to the master branch, where image versions are hardcoded in the configuration. In this case you can roll back with a simple revert in the master branch.


10. Deployment automation

To allow Gitlab-runner to decrypt our secrets, we need to export the repository key and add it to our CI environment variables:

git crypt export-key /tmp/docs-repo.key
base64 -w0 /tmp/docs-repo.key; echo

The output string should be saved in GitLab; let's go to our project settings: Settings → CI / CD → Variables

And create a new variable:

  • Type: File
  • Key: GITCRYPT_KEY
  • Value: <your string>
  • Protected: true (for the training can be false)
  • Masked: true
  • Scope: All environments

Now update our .gitlab-ci.yml adding to it:
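The added jobs were embedded as an external snippet; a hedged sketch of a deploy job follows. The toolbox image path, job name and trigger refs are assumptions; the qbec flags are the ones explained below.

```yaml
# Sketch of a deploy job; image path and triggers are illustrative
deploy_website:
  stage: deploy
  image: registry.gitlab.com/kvaps/docs.example.org/toolbox:v0.0.1
  script:
    # GITCRYPT_KEY is a File-type variable holding the base64-encoded key
    - base64 -d "$GITCRYPT_KEY" > /tmp/git-crypt.key
    - git crypt unlock /tmp/git-crypt.key
    - qbec apply default
      --root deploy/website
      --force:k8s-context __incluster__
      --wait
      --yes
  only:
    refs:
      - tags
```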

Here we used several new options for qbec:

  • --root some/app — defines the directory of the application
  • --force:k8s-context __incluster__ — a magic variable that forces deployment to the same cluster where gitlab-runner is running. This is necessary because otherwise qbec would try to find a suitable Kubernetes server in your kubeconfig
  • --wait — makes qbec wait until the created resources reach the Ready state and only then exit with a successful exit code
  • --yes — disables the interactive prompt Are you sure? during deployment

Do not forget to commit our changes:

git add .gitlab-ci.yml
git commit -m "Automate deploy"

And after the git push we will see how our applications were deployed:

Screenshot of second pipeline

11. Artifacts and building on push to master

Usually the steps above are enough to build and deliver almost any microservice, but we don't want to add a tag every time we need to update the site. Therefore, we will take a more dynamic approach and configure digest-based deployment directly in the master branch.

The idea is simple: now the image of our website will be rebuilt each time you push to master, and after that it will automatically be deployed to Kubernetes.

Let’s update these two jobs in our .gitlab-ci.yml:
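The updated jobs were embedded as an external snippet; a hedged sketch follows, combining the changes described below (master in refs, branch-name image tags, the digest artifact). Job names, the toolbox image path and the registry auth step (omitted here) are assumptions.

```yaml
# Sketch of the updated jobs; names and image paths are illustrative
build_website:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v0.15.0-debug
    entrypoint: [""]
  variables:
    GIT_SUBMODULE_STRATEGY: normal
  script:
    - /kaniko/executor
      --cache
      --context $CI_PROJECT_DIR
      --dockerfile $CI_PROJECT_DIR/dockerfiles/website/Dockerfile
      --destination $CI_REGISTRY_IMAGE/website:$CI_COMMIT_REF_NAME
      --digest-file $CI_PROJECT_DIR/digest
  artifacts:
    paths:
      - digest
  only:
    refs:
      - master
      - tags

deploy_website:
  stage: deploy
  image: registry.gitlab.com/kvaps/docs.example.org/toolbox:v0.0.1
  script:
    - base64 -d "$GITCRYPT_KEY" > /tmp/git-crypt.key
    - git crypt unlock /tmp/git-crypt.key
    # the digest file is passed through from the build job as an artifact
    - DIGEST="$(cat digest)"
    - qbec apply default
      --root deploy/website
      --force:k8s-context __incluster__
      --wait
      --yes
      --vm:ext-str digest="$DIGEST"
  only:
    refs:
      - master
```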

Note that we added the master branch to the refs for the build_website job, and that we now use $CI_COMMIT_REF_NAME instead of $CI_COMMIT_TAG. This way we stop tying docker image tags to Git tags; images will now be tagged with the commit's branch name for each pipeline. This still works with tags as well, which allows us to save snapshots of specific site versions in the docker registry.

The --vm:ext-str digest="$DIGEST" option for qbec allows passing an external variable to jsonnet.

Since we want to deploy every version of our application to the cluster, we can no longer use tag names, because they would stay unchanged. We need to specify the exact image version for every deployment operation, so that a rolling update is triggered when it changes.

Here we will use kaniko's ability to save the image digest to a file (the --digest-file option); we will then pass this file through as an artifact and read it at the deployment stage.

Let’s update the parameters for our deploy/website/environments/base.libsonnet which will now look like this:
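The updated parameters were embedded as an external snippet; a hedged sketch follows. It pins the image by digest using the external variable passed from the pipeline; field names are the same assumptions as before.

```jsonnet
// Sketch: the digest external variable pins the exact image version
local digest = std.extVar('digest');

{
  components: {
    website: {
      name: 'website',
      image: 'registry.gitlab.com/kvaps/docs.example.org/website@' + digest,
      replicas: 1,
      domain: 'docs.example.org',
    },
  },
}
```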

Done, now any commit to master triggers a docker image build for website and then deploys it to Kubernetes.

Do not forget to commit our changes:

Let’s check, after git push we should see something like this:

Screenshot of pipeline for master

We do not need to redeploy gitlab-runner on every push unless something has changed in its configuration, so let's fix that in .gitlab-ci.yml:

The changes option allows you to monitor changes in deploy/gitlab-runner/ and trigger the job only in that case:
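The updated job was embedded as an external snippet; a hedged sketch of the rule follows. The job name, toolbox image path and glob pattern are assumptions.

```yaml
# Sketch: redeploy the runner only when its config changes
deploy_gitlab_runner:
  stage: deploy
  image: registry.gitlab.com/kvaps/docs.example.org/toolbox:v0.0.1
  script:
    - base64 -d "$GITCRYPT_KEY" > /tmp/git-crypt.key
    - git crypt unlock /tmp/git-crypt.key
    - qbec apply default
      --root deploy/gitlab-runner
      --force:k8s-context __incluster__
      --wait
      --yes
  only:
    refs:
      - master
    changes:
      - deploy/gitlab-runner/**/*
```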

Do not forget to commit our changes:

git add .
git commit -m "Configure dynamic build"

git push, that's better:

Screenshot of updated pipeline

12. Dynamic environments

It is time to diversify our pipeline with dynamic environments.

First, update the build_website job in our .gitlab-ci.yml by removing the only block from it, which will make GitLab trigger the job on any commit in any branch:

Then update the deploy_website job by adding an environment block to it:

This will allow Gitlab to associate the job with the prod environment and display the correct link to it.

Now add two more jobs:
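The two jobs were embedded as an external snippet; a hedged sketch follows. Job names, the toolbox image path, the review URL scheme and the clone path are assumptions; the qbec flags and GIT_STRATEGY trick mirror the description below.

```yaml
# Sketch of the review jobs; names and URL scheme are illustrative
deploy_review:
  stage: deploy
  image: registry.gitlab.com/kvaps/docs.example.org/toolbox:v0.0.1
  script:
    - base64 -d "$GITCRYPT_KEY" > /tmp/git-crypt.key
    - git crypt unlock /tmp/git-crypt.key
    - DIGEST="$(cat digest)"
    - qbec apply review
      --root deploy/website
      --force:k8s-context __incluster__
      --app-tag $CI_COMMIT_REF_SLUG
      --wait
      --yes
      --vm:ext-str digest="$DIGEST"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: http://$CI_COMMIT_REF_SLUG.docs.example.org
    on_stop: stop_review
  except:
    refs:
      - master

stop_review:
  stage: deploy
  image: registry.gitlab.com/kvaps/docs.example.org/toolbox:v0.0.1
  variables:
    GIT_STRATEGY: none
  script:
    # the branch is already gone, so clone master and delete from there
    - git clone "$CI_REPOSITORY_URL" --branch master /tmp/repo
    - cd /tmp/repo
    - base64 -d "$GITCRYPT_KEY" > /tmp/git-crypt.key
    - git crypt unlock /tmp/git-crypt.key
    - qbec delete review
      --root deploy/website
      --force:k8s-context __incluster__
      --app-tag $CI_COMMIT_REF_SLUG
      --yes
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
```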

They will run on push to any branch except master and will deploy a preview version of the site.

Here we see a new option for qbec: --app-tag. It allows you to tag deployed versions of the application and work only within the context of that tag. Thus, we don't have to create a separate environment for each review; we can simply reuse the same one.

Here we also use qbec apply review instead of qbec apply default; this is exactly where we describe the differences between our environments (review and default):

Add a review environment to deploy/website/qbec.yaml:

Then declare it in deploy/website/params.libsonnet:

And write the custom parameters for it in deploy/website/environments/review.libsonnet:

Let's also take a closer look at the stop_review job. It is triggered when the branch is deleted, and to keep GitLab from trying to check it out we use GIT_STRATEGY: none; instead, we clone the master branch and use it to delete the review deployment.
This is a little ugly, but I have not yet found a more beautiful way.
An alternative would be to deploy each review version to a separate namespace and then remove the whole namespace together with the application.

Do not forget to commit our changes:

git add .
git commit -m "Enable automatic review"

git push, git checkout -b test, git push origin test, and check this:

Screenshot of created environments in Gitlab

Everything works? Excellent! Delete our test branch: git checkout master, git push origin :test, and check that the environment removal finished without errors.

I want to clarify right away that any developer able to create branches in the project can also change the .gitlab-ci.yml file in their branch and gain access to the secret variables.
Therefore, it is strongly recommended to allow their use only on protected branches, for example master, or to provide a separate set of variables for each environment.


13. Review Apps

Review Apps is a feature that allows you to add a button for each file in the repository to quickly view it in a deployed environment.
For these buttons to appear, you need to create a file .gitlab/route-map.yml and describe all the path transformations in it; in our case it will be very simple:
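The file content was embedded as an external snippet; for a Hugo site, a minimal hedged sketch of the mapping could look like this:

```yaml
# Maps each Markdown source file to its generated page on the site
- source: /content\/(.+?)\.md$/
  public: '\1/'
```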

Do not forget to commit our changes:

git add .gitlab/
git commit -m "Enable review apps"

git push, and check:

Screenshot of Review App button

Job is done!

Sources of this work:

Thank you for your attention, I hope you enjoyed 😉
