Completing the circle (K8s)

Making a Kubernetes cluster functional with a few extra add-ons

Ani Sinanaj
Oct 13

In the previous article, I explained how to set up an HA K8s cluster.

But it’s just a bare installation of Kubernetes.

What’s missing

After setting up the cluster, it won’t have much on it besides Kubernetes itself. To make it usable we need to deploy some extra applications.

The first and most important one is the CNI. We configured Kubernetes to use Flannel, so now we have to add it to the cluster, otherwise nothing will work. In fact, if you check the nodes, they will show up as “NotReady”.

kubectl get nodes -o wide

To add Flannel, the command is very simple.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

To use kubectl from your own computer, copy /etc/kubernetes/admin.conf from the first control plane node to ~/.kube/config.
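For example, assuming you have root SSH access to that node (the IP below is a placeholder):

scp root@<control-plane-ip>:/etc/kubernetes/admin.conf ~/.kube/config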

Now that we have configured the network, we can add the storage provisioners.

In the last article I assumed a Ceph cluster was already available, so I’ll deploy the provisioners for it, keeping in mind that it is already configured for K8s on its end.

I’ll start by cloning the official repository for storage extensions.

git clone git@github.com:kubernetes-incubator/external-storage.git

I’ll focus on the ceph folder, which has two different provisioners: one for RBD, which provides block storage, and one for CephFS, Ceph’s shared file system.

I like having both because they work in different ways and are both useful in different situations.

RBD
The first thing to do is create a couple of Secret resources which will hold the keys to access the Ceph cluster. The file already exists; we just need to fill in the information in ./ceph/rbd/examples/secrets.yaml.

The file should look something like this. By default the namespace is kube-system, but I changed it to storage (remember to create that namespace first). If you follow my example, make sure to also change the namespace in ./ceph/rbd/deploy/rbac/rolebinding.yaml and ./ceph/rbd/deploy/rbac/clusterrolebinding.yaml.

apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: storage
type: "kubernetes.io/rbd"
data:
  key: QVFDMHFJWmNSay9ZSnhBQXlhSFhTOFo5a3hKODE1ZUdQWVRYYmc9PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: storage
type: "kubernetes.io/rbd"
data:
  key: QVFCakxMNWNlcUI4QmhBQTd2UUNDNFNSaTk0ZDgvMTNXOUdMemc9PQ==

To get the keys from the Ceph cluster run these commands on one of its nodes.

ceph auth get-key client.admin | base64
ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
ceph auth get-key client.kube | base64

The last thing to edit is ./ceph/rbd/examples/class.yaml, which should look like below. Set your Ceph monitor addresses under monitors, along with the references to the secrets created above.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 116.202.35.140:6789,116.202.35.141:6789,116.202.35.142:6789
  pool: kube
  adminId: admin
  adminSecretNamespace: storage
  adminSecretName: ceph-admin-secret
  userId: kube
  userSecretNamespace: storage
  userSecretName: ceph-secret
  imageFormat: "2"
  imageFeatures: layering

Finally, we apply the configuration files to the cluster.

kubectl apply -f ./ceph/rbd/deploy/rbac
kubectl apply -f ./ceph/rbd/examples/secrets.yaml
kubectl apply -f ./ceph/rbd/examples/class.yaml

You can use the claim.yaml and test-pod.yaml files under examples to test RBD.
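For reference, such a claim looks roughly like this (the claim name is just an example); it simply references the rbd storage class created above:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rbd-test-claim
spec:
  storageClassName: rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi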

CephFS
Now, to set up CephFS, the steps are similar. First we’ll create this secret, or reuse the one created above by copying it into the cephfs namespace.

apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: cephfs
type: "kubernetes.io/rbd"
data:
  key: QVFDMHFJWmNSay9ZSnhBQXlhSFhTOFo5a3hKODE1ZUdQWVRYYmc9PQ==

For CephFS, the admin-secret is the only one needed.
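Keep in mind that the cephfs namespace has to exist before the secret can be applied into it. If you haven’t created it yet:

kubectl create namespace cephfs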

Now to create the roles and apply the secret.

kubectl apply -f ./ceph/cephfs/deploy/rbac
kubectl apply -f ./ceph/cephfs/example/secret.yaml

The last thing to edit is the storage class configuration found in ./ceph/cephfs/example/class.yaml: add the correct IP addresses of the Ceph monitors and the reference to the secret created above.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 116.202.35.140:6789,116.202.35.141:6789,116.202.35.142:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: cephfs
  claimRoot: /pvc-volumes

Then we can just apply it.

kubectl apply -f ./ceph/cephfs/example/class.yaml
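As with RBD, you can test it with a claim. Here is a minimal sketch (the name is made up); note that CephFS volumes can be mounted by multiple Pods at once, hence ReadWriteMany:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-test-claim
spec:
  storageClassName: cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi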

And that’s all there is to it.

Out of the box, a K8s cluster doesn’t handle HTTP(S) requests. We need to configure a reverse proxy and/or load balancer. These resources are called Ingress-controllers.

The Ingress-controller’s job is to route traffic to the desired Pod according to the associated Ingress rule.
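To give an idea of what such a rule looks like, here is a minimal, illustrative Ingress (the hostname, service name and port are made up for this example):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.example.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80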

There are different solutions to use as an Ingress-controller: Nginx, HAProxy, Istio, Traefik. Traefik comes with Let’s Encrypt support built in. You can also use multiple controllers.

Nginx
I’m a fan of Nginx, so I’m going with it. Install Nginx as follows

helm install stable/nginx-ingress --name nginx-ingress-controller

Now to deploy a sample project to check if the Ingress-controller is doing its job

kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/docs/tutorials/acme/quick-start/example/deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/docs/tutorials/acme/quick-start/example/service.yaml

Now we’ll deploy the Ingress for the above, remembering to change the hostname to our own.

kubectl create --edit -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/docs/tutorials/acme/quick-start/example/ingress.yaml

Now going to that domain, you should see Kuard.

If you’re having trouble making it work, it’s probably because the Ingress-controller doesn’t know which public IPs are available and how to use them. Try adding “externalIPs” to your Ingress-controller’s Service with an array of all the public IPs of the cluster. MetalLB takes care of that automatically.
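The relevant part of the controller’s Service would look roughly like this (the IPs are placeholders; with the stable/nginx-ingress chart this should also be settable through the controller.service.externalIPs value, though treat that as an assumption):

spec:
  externalIPs:
  - 203.0.113.10
  - 203.0.113.11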

Let’s Encrypt
As far as I know, the most used Let’s Encrypt implementation for Kubernetes is cert-manager.

It is maintained and developed by Jetstack. The installation is widely documented on many websites and on GitHub.

The first thing to do is to create the Custom Resource Definitions.

kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml

Then we’ll create a dedicated namespace.

kubectl create namespace cert-manager

The following command will disable resource validation.

kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true

Then to install it we need to add the Helm repo and proceed with the installation.

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.9.1 \
jetstack/cert-manager

Finally, we need to create two certificate issuers, one for staging and one for production. The only thing to update is the email address.

kubectl create --edit -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/docs/tutorials/acme/quick-start/example/staging-issuer.yaml
kubectl create --edit -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/docs/tutorials/acme/quick-start/example/production-issuer.yaml
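Roughly, the staging issuer looks something like the snippet below. This is a sketch based on the cert-manager 0.8-era API; the exact schema may differ, so prefer the file opened by the command above.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: me@example.org
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx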

This should give us a working environment.

Now, to make our Kuard example use Let’s Encrypt, we should re-deploy its Ingress with a few changes. The file below contains the tls key in the configuration, which specifies the domain the certificate should be issued for. Note also the annotations, which determine which issuer to use.

kubectl create --edit -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/docs/tutorials/acme/quick-start/example/ingress-tls-final.yaml
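For reference, the relevant additions compared to a plain Ingress are roughly the following (the annotation key reflects the 0.8-era API group, and the hostname and secret name are illustrative):

metadata:
  annotations:
    certmanager.k8s.io/issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - kuard.example.org
    secretName: kuard-example-tls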

Helm
Fortunately, the Kubernetes world has evolved enough to have a package manager. In fact, Helm is exactly that, and it is very easy and straightforward to set up.

Install Helm on your system (on macOS, via Homebrew):

brew install kubernetes-helm

Give it access to the cluster by creating and then applying this file.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Let’s say we called it rbac-config.yaml

kubectl create -f rbac-config.yaml
helm init --service-account tiller --history-max 200
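Once the Tiller pod is up, you can confirm that the client can talk to it; the command below should report both a client and a server version.

helm version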

Good to go!


Issues

Sometimes, depending on the cloud provider, your resolv.conf file may have more than three nameserver records. That can be a problem, because neither kube-dns nor CoreDNS can resolve domain names reliably in that case. It is not an issue with the two services per se; it is related to Linux’s libc, whose resolver only supports up to three nameserver entries, as stated here.
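If this bites you, one common workaround (the paths and nameservers below are just examples) is to point the kubelet at a trimmed resolver file via its --resolv-conf flag:

# /etc/kubernetes/resolv.conf - keep at most three nameserver entries
nameserver 1.1.1.1
nameserver 8.8.8.8
# then start the kubelet with --resolv-conf=/etc/kubernetes/resolv.conf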

Creating an on-premises cluster comes with its own limits, the first one being scalability. It is much more difficult to scale these servers: you’d have to buy the hardware, or rent an extra node, and so on.

Along with scalability come costs: since you have physical machines, you can’t scale down on demand, which means you’ll pay for the full infrastructure even if you only use a small percentage of it.

Load balancing is another limit. Configuring MetalLB or something similar is not always possible. It can be configured on the actual K8s nodes, but that would add overhead both on the network and on the cluster itself. If it were to be managed outside of the K8s cluster (as we did with the storage), we’d need at least 3 more servers to keep high availability. Load balancing needs to be done both for the control plane (API requests) and for the worker nodes (application requests). Another way to solve this, as discussed above, is to use a DNS-level load balancer, which works but isn’t exactly fast at failing over.


Monitoring & Management

There are a lot of parts here and things may fail. That’s why we need to monitor the system continuously in order to act fast if something goes wrong. Along with monitoring tools, we need to make it easier for ourselves to manage everything.

Helm helps us install and update the tools and software needed to complete the cluster.

Install it as described above.

Rancher is a tool to manage one or more Kubernetes clusters in the same place.

It can be installed anywhere, not necessarily on the cluster itself. If you do install it on the cluster, it will recognise the cluster and configure itself so that you can manage it from there.

You can install it as a docker container like below

docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

Or, to install it on the K8s cluster, execute this command. Update the hostname and email so the certificate is created correctly (this assumes you’re using cert-manager).

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm install rancher-latest/rancher \
  --name rancher \
  --namespace cattle-system \
  --set hostname=rancher.my.org \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=me@example.org

You’re good to go as soon as the deployment’s done.

Now that we have Rancher, we can install Prometheus and Grafana. These are both tools that monitor the cluster in terms of hardware resources, and they can be configured to send alerts in case something’s wrong.

These can also be installed through Helm, and that’s exactly what Rancher does: it comes with Helm built in, so we can install packages on all the clusters we manage through Rancher.
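If you prefer installing them with plain Helm rather than through Rancher’s catalog, something along these lines should work (the chart names come from the stable repository of that era and the namespace is just a suggestion):

helm install stable/prometheus --name prometheus --namespace monitoring
helm install stable/grafana --name grafana --namespace monitoring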

Deploying them through Rancher, though, integrates the metrics into Rancher’s panel.

Monitoring and alerting from within the cluster is good, but it is not enough. What happens if, for some reason, the whole cluster isn’t accessible, for example because of a major network malfunction? For this reason we also need external monitoring.

For this purpose I’ve used both NodeQuery and StatusCake. Both are similar when it comes to monitoring server resources: you install an agent on the host, which periodically sends out the system status.

StatusCake can also be used to ping the different services and can be configured to send alerts when they’re not responding. It can be configured for all kinds of errors (40x, 50x, and so on) as well as timeouts.

Other management tools worth noting are, of course, the Kubernetes Dashboard and K9s.

To install the dashboard, apply its configuration to the cluster and then create the credentials as below

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard

To use it, first get the access token

kubectl get secrets --namespace kube-system   # to find the token secret's name
kubectl describe secret --namespace kube-system kubernetes-dashboard-token-9gz66

Since by default it isn’t exposed through an Ingress, you can just use kubectl proxy and then open the dashboard through the proxy URL.
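In practice that means something like the following (the URL is the standard proxy path when the dashboard lives in kube-system):

kubectl proxy
# then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/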

K9s is very simple to install; on macOS it’s as follows

brew install derailed/k9s/k9s

To use it just run k9s from the terminal.



I hope I covered everything. Thanks for having the patience to read this far.
Stay tuned for more :)
