KAS: the GitLab Kubernetes Agent

Matteo Codogno
WellD Tech
Jun 28, 2022

GitLabOps

In this blog post series, I will show how GitLab can be used to implement GitOps best practices. In this first article, we will focus on how to migrate from the GitLab Kubernetes integration based on the K8s API and cluster certificates to the GitLab Kubernetes Agent Server (KAS).

This step is even more crucial because the cluster certificate integration has been deprecated since GitLab 14.5.

Starting point

At WellD, most of our software is deployed on K8s. We have a job template that leverages the bitnami/kubectl Docker image to apply K8s descriptors or kustomize files against the K8s cluster. Authentication to the K8s cluster is performed via the KUBE_URL and KUBECONFIG environment variables, which the certificate-based integration automatically exposes.

k8s_deploy:
  stage: deploy
  tags: [welld]
  image: bitnami/kubectl:latest
  retry: 2
  interruptible: true
  cache: {} # In this job we do not need cache.
  artifacts:
    paths: [environment_url.txt]
  script:
    - upgrade_project_version || true
    - install_dependencies
    - persist_environment_url
    - deploy
  rules:
    - if: $KUBE_DEPLOY_DISABLED
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
      allow_failure: true
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    - if: $CI_COMMIT_TAG
We maintain a bash script that is loaded as the default before_script for all jobs. This script contains all the functions we need in our CI/CD jobs.
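
As a rough idea, such a script could look like the sketch below. The function names match the job above, but the file name (ci-functions.sh) and the bodies (version bump, kustomize path, environment URL) are hypothetical placeholders, not our actual implementation.

# ci-functions.sh: loaded as the default before_script (sketch only)

upgrade_project_version() {
  # Bump the project version (e.g. via mvn or npm); implementation omitted here.
  echo "upgrading project version..."
}

install_dependencies() {
  # Install extra tools the job needs if the image does not ship them.
  echo "installing dependencies..."
}

persist_environment_url() {
  # Save the environment URL so GitLab can link the deployed environment.
  echo "https://app.example.com" > environment_url.txt
}

deploy() {
  # Apply the kustomize overlay for the target environment (placeholder path).
  kubectl apply -k "k8s/overlays/${CI_ENVIRONMENT_NAME:-production}"
}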

Security issues

Why did GitLab deprecate this kind of integration with Kubernetes?

Direct integration with the K8s API via cluster certificates has some security issues. First of all, anyone who can create a deployment job has direct access to the Kubernetes API. Furthermore, as most of the features within the integration require elevated privileges, we had to give cluster-admin rights to GitLab.

GitOps

The core idea of GitOps is having all infrastructure descriptors (Infrastructure as Code, IaC) versioned in a git repository, the Single Source of Truth, and a process that keeps the infrastructure aligned with the latest commit.

The traditional way to deploy a new piece of infrastructure is called push-based deployments. In this deployment strategy, the infrastructure update is triggered by a codebase update and executed by the build pipeline. When new commits are pushed, the triggered pipeline builds artifacts (JAR, tarball, etc.) and container images, and finally applies the infrastructure descriptors. With this approach, our pipeline must have the credentials to update the target environment. An additional drawback of this deployment strategy is that the pipeline is triggered only when the git repository changes: if the environment drifts from the desired state, there is no automatic way to reconcile it.

push-based deployment

The preferred way to update the environment is called pull-based deployments. With this approach, we need an operator that continuously compares the current state of the infrastructure with the desired state. Whenever a difference is detected, the operator updates the infrastructure to match the desired state.

pull-based deployment

In GitLab terms, this operator is called the Agent.
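
As a small preview of the pull-based setup (the topic of the next post), the Agent configuration file can declare which project and manifest paths it should watch and reconcile. The snippet below is only a sketch of such a gitops section; the project path and glob are placeholders.

gitops:
  manifest_projects:
    - id: path/to/kube-infrastructure
      paths:
        - glob: 'k8s/**/*.yaml'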

Setting up GitLab Agent

Installation of the GitLab Agent Server (a.k.a. KAS) is very simple: we just edited the gitlab.rb configuration file to add the line gitlab_kas['enable'] = true, and then launched the GitLab reconfiguration command gitlab-ctl reconfigure.
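
On a self-managed Omnibus installation, this boils down to the two steps above, roughly:

# in /etc/gitlab/gitlab.rb
gitlab_kas['enable'] = true

# then apply the new configuration
sudo gitlab-ctl reconfigure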

After that, we chose our kube-infrastructure git repository as the configuration repository, that is, the GitLab repository that holds the Agent configuration file, located at .gitlab/agents/<agent-name>/config.yaml. In this file, we also had to grant other projects access to the Agent:

ci_access:
  projects:
    - id: path/to/project

At this point, from the project sidebar (Infrastructure > Kubernetes clusters), we registered a new Agent: the modal that appears contains the registration token and the Helm instructions to install the Agent on the K8s cluster.
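
The Helm instructions look roughly like the sketch below, assuming the official gitlab-agent chart; the agent name, token, and KAS address are placeholders, so copy the exact commands from the registration modal.

helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install your-agent-name gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --create-namespace \
  --set config.token=<registration-token> \
  --set config.kasAddress=wss://kas.gitlab.example.com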

Update CI/CD scripts

As we already mentioned, we have CD scripts that deploy our kustomize files to the K8s cluster, so it made sense to find a way to reuse them. This turned out to be possible by adding the line kubectl config use-context path/to/agent-configuration-project:your-agent-name to the deploy job:

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl config use-context path/to/agent-configuration-project:your-agent-name
    - kubectl apply -k ...

Missing features

While this new Agent-based approach overcomes the security issues, it lacks a bunch of handy features that were included in the deprecated cluster certificate integration, namely:

  • view pod logs
  • open a shell into a pod
  • view pod resources

directly from the GitLab interface!

All these features are deprecated together with the cluster certificate integration, and the GitLab team is working to reimplement them on top of KAS. However, at the time of writing, they are not yet available with the KAS integration.

Conclusion

Migration from the cluster certificate integration to KAS was not painful. KAS embraces GitOps principles, and we did not need to look for another tool (Argo CD, Flux, Gitkube, etc.) to implement GitOps workflows.

In the next blog post, we will connect GitLab Agent to our kube-infrastructure git repository, where we version our Kubernetes infrastructure.

