Kubernetes cluster: production ready

Konstantin Makarov · Published in Scum-Gazeta · Oct 26, 2021

I had a childhood dream — to have my own Kubernetes cluster, and I fulfilled it!

Today’s tech stack

  • Kubernetes
  • Helm
  • Terraform
  • Ansible
  • Lens
  • Argo-CD
  • Prometheus
  • Loki
  • Traefik
  • Gitlab

Let's rock!

I have long wanted to use a Kubernetes cluster for my own purposes. But I didn't want something artificial and single-node.

Now both I and my pet projects have matured, so I started thinking seriously about a cluster.

Of course, the easiest start was with managed solutions.

By that time, I already used the services of a cloud provider, but its managed Kubernetes offering turned out to be too expensive for pet projects.

The choice fell on low-cost hosters, and a friend, a DevOps engineer, suggested Hetzner to me.

Ordering virtual servers is not a problem, but who will configure the cluster?

The Kubernetes community has already prepared an answer to this question:

Kubespray!

This is a set of Ansible roles for automated cluster configuration.

The only thing left to do is to write manifests for the deployment of services. We can do it ourselves :)

Creating nodes for the cluster

Terraform

Here I once again got acquainted with the amazing Terraform tool, which implements the idea of "infrastructure as code".

To create our infrastructure, we describe its configuration (servers, network, firewall), pass in the cloud provider's token, and get ready-made, but still bare, nodes for the future cluster.

Here is my repository; feel free to customize it for yourself:

https://github.com/ihippik/terraform-hetzner
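For a flavour of what such a configuration looks like, here is a minimal sketch using the official hcloud provider. The server names, count, type, and SSH key path are illustrative placeholders, not taken from the repository above:

```bash
cat > main.tf <<'EOF'
terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
  }
}

variable "hcloud_token" {
  sensitive = true            # pass via TF_VAR_hcloud_token
}

provider "hcloud" {
  token = var.hcloud_token
}

resource "hcloud_ssh_key" "default" {
  name       = "cluster-key"
  public_key = file("~/.ssh/id_rsa.pub")   # this key will land on every node
}

resource "hcloud_server" "node" {
  count       = 3
  name        = "k8s-node-${count.index}"
  image       = "ubuntu-20.04"
  server_type = "cx21"                      # small, inexpensive instance
  ssh_keys    = [hcloud_ssh_key.default.id]
}
EOF

terraform init && terraform apply
```

After apply, Terraform prints the node IP addresses, which we will feed to Kubespray in the next step.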

Later, you can create additional assets manually, but that is a bit last-century; please describe your infrastructure in Terraform.

Creating a Kubernetes cluster

Go to the Kubespray repository and read the Quick Start.

https://github.com/kubernetes-sigs/kubespray

There are a few simple things to do here (a condensed sketch of the commands follows the list):

  • install the dependencies (Ansible, etc.)
  • make a copy of the sample inventory for the new cluster
  • specify the IP addresses of the nodes created in the first stage (including the master node)
  • if we know the subject well, we can fine-tune the cluster configuration
  • run the Ansible playbook as root
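This is roughly what those steps look like as commands, following the Kubespray Quick Start at the time of writing (the cluster name and node IPs below are placeholders):

```bash
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt                     # Ansible and friends

# copy the sample inventory for our new cluster
cp -rfp inventory/sample inventory/mycluster

# generate hosts.yaml from the node IPs created by Terraform
declare -a IPS=(10.0.0.2 10.0.0.3 10.0.0.4)
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# run the playbook as root
ansible-playbook -i inventory/mycluster/hosts.yaml \
  --become --become-user=root -u root cluster.yml
```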

It is important that you can connect to your nodes via SSH as root: your public key must already be on them. Luckily for us, Terraform takes care of this.

Do not be alarmed if some non-critical errors show up during the run; I had them too, and my cluster still worked despite them.

Now you can grab your cluster configuration from the master node:

cat /etc/kubernetes/admin.conf

With this configuration, you can connect to your Kubernetes cluster from your local computer (remember to replace localhost with the master node's IP).
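A sketch of that step, assuming your admin.conf points at localhost like mine did and using a placeholder master IP:

```bash
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/hetzner.conf

# point the config at the master's public address instead of localhost
sed -i 's#https://127.0.0.1:6443#https://<master-ip>:6443#' ~/.kube/hetzner.conf

export KUBECONFIG=~/.kube/hetzner.conf
kubectl get nodes   # the cluster should now answer from your laptop
```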

We now have a fully operational cluster. Let’s set it up now.

Attention

This is the most mysterious stage in this entire article. Please follow the instructions on GitHub carefully. At this stage, I made one serious mistake, and I spent a lot of time finding and fixing it:

In the cluster settings, I specified external IP addresses everywhere in the inventory file. The cluster still worked, but I could not access it from the outside. Only the help of qualified specialists fixed the situation.
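In Kubespray's inventory that distinction lives in hosts.yaml: Ansible may connect over the public address, while Kubernetes itself should bind to the private one. A hypothetical fragment (addresses are made up, group definitions omitted for brevity):

```bash
# fragment of inventory/mycluster/hosts.yaml
cat <<'EOF'
all:
  hosts:
    node1:
      ansible_host: 95.217.0.10   # public IP, used only by Ansible over SSH
      ip: 10.0.0.2                # private IP that Kubernetes binds to
      access_ip: 10.0.0.2         # address other nodes use to reach this one
EOF
```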

Setting up a Kubernetes cluster

Lens

For a first start, it is quite convenient to use Lens, an IDE for managing the cluster.

It is also, of course, nice to be able to work with the cluster from the console.

We add the cluster configuration that we grabbed from the master node, connect to our cluster, and can then use this wonderful product.

Prometheus

Since we will need to monitor not only our services but also the cluster itself, we can deploy Prometheus via kube-prometheus-stack.

https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

Note that it is not Prometheus itself that gets deployed first, but the Prometheus Operator, a separate Pod, which in turn deploys Prometheus and Alertmanager.

Our actions:
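Roughly, this boils down to a few Helm commands (the namespace and release names are just my choice):

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace monitoring
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring
```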

If you want, you can set up alerts for various cluster incidents in Alertmanager.

Loki

We will store the logs in Loki, collect them with Promtail, and once again view them, together with our monitoring dashboards, in Grafana.

  • helm repo add loki https://grafana.github.io/loki/charts
  • helm repo update
  • kubectl create namespace loki-stack
  • helm upgrade --install loki --namespace=loki-stack loki/loki-stack

Next, in Grafana, add a new Loki source and point it to the location of the service. The format is the classic one for Kubernetes:

[service].[namespace]:[port]

We get: http://loki.loki-stack:3100

And that's all. From now on, you can view the logs of your Pods in Explore or build convenient dashboards for them.

Now the cluster is ready to receive guests. Guests from Gitlab.

Gitlab

Many years ago, I was too shy to share my code with the world and used the private repositories of Bitbucket, then Gitlab. Now I'm not shy, but the code is under NDA, so I continue there :)

For me, this is the most familiar environment.

Runner

We will definitely want to build our projects automatically, for this dirty work we need Gitlab-Runner.

Install it via Helm: https://docs.gitlab.com/runner/install/kubernetes.html

I did this directly from Lens in the Apps section by specifying the Gitlab URL and the runner registration token in the values.
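The same thing can be done from the console; a sketch with the chart's documented values (the URL and token are placeholders, and the namespace is my choice):

```bash
helm repo add gitlab https://charts.gitlab.io
helm repo update
kubectl create namespace gitlab-runner
helm install gitlab-runner gitlab/gitlab-runner \
  --namespace gitlab-runner \
  --set gitlabUrl=https://gitlab.com/ \
  --set runnerRegistrationToken=<your-registration-token>
```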

Pull-Secret

If you work with private Gitlab repositories, then kubelet will need some way to log in to the Docker registry from which it will pull your application images. To do this, you need to create an object of type Secret.

  • create a deploy token
  • get a base64-encoded string from its credentials
  • form a secret from it that we will reference in our manifests (see the sketch below)
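A sketch of those steps with kubectl, which takes care of the base64 encoding for you. The secret name is my choice, and the registry address assumes gitlab.com (adjust for a self-hosted instance):

```bash
kubectl create secret docker-registry gitlab-registry \
  --docker-server=registry.gitlab.com \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token> \
  --namespace=default
```

In your manifests, reference this secret in the Pod spec under imagePullSecrets so kubelet can pull the private images.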

Preparing the cluster for work

Persistent volume

If we do not want our data to disappear after our services' Pods restart, we will have to use persistent storage.

We will use the CSI driver kindly provided by Hetzner.

Details here:

Now let's ask the driver for a small volume for our experiments; the minimum we can order is 10 GB.
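A minimal claim might look like this, assuming the driver's default hcloud-volumes storage class and a hypothetical claim name:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce          # Hetzner volumes attach to a single node
  storageClassName: hcloud-volumes
  resources:
    requests:
      storage: 10Gi          # the minimum size Hetzner will sell us
EOF
```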

If you need a more flexible solution, try using:

Traefik

To route traffic into the cluster, we need to work with Ingresses. The most common solution is the NGINX Ingress Controller, but this is a matter of taste, and we will use Traefik here since we are Gophers :)

You could also use Hetzner's load balancer, but it is a paid service.

Install it as a DaemonSet:
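A sketch of that install with the official Helm chart (the namespace is my choice; deployment.kind is the chart value that switches it to a DaemonSet at the time of writing):

```bash
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
kubectl create namespace traefik
helm install traefik traefik/traefik \
  --namespace traefik \
  --set deployment.kind=DaemonSet
```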

Next, we can use Traefik's custom resources for flexible routing.

Since I was relocating an existing site, I already had a certificate for HTTPS access to it: the keys and chains obtained from Let's Encrypt. I simply added them to the cluster as they were. But remember that you will have to renew them regularly.

But the correct solution would be to configure Traefik's certificate resolver, which will obtain and renew certificates for you:
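A sketch of such an ACME resolver passed to the chart as extra values; the resolver name, email, and storage path are placeholders:

```bash
cat > traefik-values.yaml <<'EOF'
additionalArguments:
  - "--certificatesresolvers.le.acme.email=admin@example.com"
  - "--certificatesresolvers.le.acme.storage=/data/acme.json"
  - "--certificatesresolvers.le.acme.tlschallenge=true"
persistence:
  enabled: true            # keep acme.json across restarts
EOF

helm upgrade traefik traefik/traefik --namespace traefik -f traefik-values.yaml
```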

Argo-CD

Since we are modern people, we will use modern GitOps tools. Argo-CD is a continuous delivery tool for Kubernetes.

Put simply, it will monitor our manifests and, as soon as they change, try to apply them to our cluster.

Everything is simple here (a condensed sketch of the commands follows the list):

  • create a namespace and deploy Argo there
  • port-forward the UI service and log in (the initial password can be found in a secret)
  • if you are using a private repository, like me, then you will need to tell Argo how to connect to it; I used a Deploy Token
  • create an application
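A condensed sketch of those steps, based on the official install manifest and its defaults:

```bash
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# reach the UI locally
kubectl port-forward svc/argocd-server -n argocd 8080:443

# the initial admin password lives in a secret
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath='{.data.password}' | base64 -d
```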

Argo-CD will now watch over the delivery of our applications. Don't forget to create an IngressRoute so that access to Argo-CD is nice and convenient.

Conclusion

Congratulations, now we have our own cluster with the required minimum for work!

For the cluster, I took inexpensive servers, up to 6 euros each; for a home project, that is enough. The whole cluster costs less than 20 euros per month.

P.S. For more professional setups, you need to approach the question more carefully: for example, set up backups of the etcd storage on the master node, and so on.

But for home use, it costs us nothing to run through all the steps in this article again if our cluster breaks :)
