Configuring Argo CD on a multi-node Hetzner Cloud
Recently I had the chance to set up Argo CD on the Hetzner Cloud platform. Hetzner is a reliable hosting provider based in Germany that has also started offering servers in the United States.
What is Kubernetes?
“Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.”
Moving on towards the configuration, we first configure the multi-node Kubernetes cluster.
Prerequisites
Familiarity with Kubernetes and Linux concepts is assumed. The configuration was tested on Ubuntu 18.04 LTS and 18.10 with Kubernetes v1.17.4.
1.1 Create Hetzner Cloud Servers
Create the Hetzner Cloud servers from the GUI, select the appropriate SSH keys, and create a network with a /16 netmask. Since we are creating a multi-node cluster, we will create one master node and multiple worker nodes (the number of workers depends on your needs). The master node uses a CX11 instance, while the worker nodes use CX21 instances.
- create floating IP; projects >> Floating IPs >> Add Floating IP
- create a network; projects >> networks >> Create Network
- SSH keys; projects >> security >> SSH Keys >> Add SSH Key
- API token (note it down, as it cannot be viewed again); projects >> security >> API Tokens >> Generate API Token
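The same resources can also be created from the command line. A minimal sketch, assuming the `hcloud` CLI is installed and `HCLOUD_TOKEN` holds the API token generated above; the resource names and the location (`nbg1`) are example values, not from this guide:

```shell
# Hypothetical CLI equivalents of the GUI steps above.
# Names, server types, and the location are example placeholders.
if command -v hcloud >/dev/null 2>&1 && [ -n "${HCLOUD_TOKEN:-}" ]; then
    hcloud network create --name k8s-net --ip-range 10.0.0.0/16
    hcloud server create --name master-1 --type cx11 --image ubuntu-18.04 \
        --ssh-key my-key --network k8s-net
    hcloud server create --name worker-1 --type cx21 --image ubuntu-18.04 \
        --ssh-key my-key --network k8s-net
    hcloud floating-ip create --type ipv4 --home-location nbg1
else
    echo "hcloud CLI or HCLOUD_TOKEN not available; use the GUI steps above instead"
fi
```

The guard keeps the script a no-op on machines where the CLI or token is missing, so it can be reviewed safely before use.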
1.2 Configure the Network of the servers
First, update the servers after creation. Log into each server (master and worker nodes) and run these commands:
all$ apt-get update
all$ apt-get dist-upgrade
all$ reboot
Now for the network configuration. Create floating IPs from the Hetzner GUI, then on each worker node create the file /etc/network/interfaces.d/60-floating-ip.cfg with the following contents:
auto eth0:1
iface eth0:1 inet static
address <floating-IP>
netmask 32
Afterwards, restart the network:
worker-node$ systemctl restart networking.service
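These two worker-node steps can be combined into a small script. A sketch, where the address is a documentation placeholder to be replaced with your actual floating IP:

```shell
# Render the floating-IP interface config for a worker node.
# 203.0.113.10 is a documentation placeholder -- substitute your floating IP.
FLOATING_IP="203.0.113.10"

CFG=$(cat <<EOF
auto eth0:1
iface eth0:1 inet static
address ${FLOATING_IP}
netmask 32
EOF
)

# Review the rendered config before installing it:
echo "$CFG"
# Then, as root on the worker node:
# echo "$CFG" > /etc/network/interfaces.d/60-floating-ip.cfg
# systemctl restart networking.service
```

Rendering the file first lets you check the substituted address before touching the live network configuration.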
1.3 Configure Kubernetes on the Cluster
The Kubernetes cluster will be set up using kubeadm. We are limited here by the interface Hetzner provides; its team supplies tools that act as the interface between Kubernetes and the Hetzner Cloud. The cluster must be configured with an external cloud provider, as required by the Hetzner Cloud Controller Manager. Create /etc/systemd/system/kubelet.service.d/20-hetzner-cloud.conf
on each server with the following contents:
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
For Docker to use the systemd cgroup driver, create /etc/systemd/system/docker.service.d/00-cgroup-systemd.conf
on each server with the following contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
Reload the systemd unit files so the new options take effect:
all$ systemctl daemon-reload
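Both drop-ins can be staged and inspected before copying them into place. A sketch using a scratch directory:

```shell
# Stage the two systemd drop-in files in a scratch directory so they can be
# reviewed, then copied to /etc/systemd/system/ as root on each server.
STAGE=$(mktemp -d)
mkdir -p "$STAGE/kubelet.service.d" "$STAGE/docker.service.d"

cat > "$STAGE/kubelet.service.d/20-hetzner-cloud.conf" <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=external"
EOF

cat > "$STAGE/docker.service.d/00-cgroup-systemd.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
EOF

ls -R "$STAGE"
# As root: cp -r "$STAGE"/* /etc/systemd/system/ && systemctl daemon-reload
```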
Run the following commands on each server (this assumes the Docker and Kubernetes apt repositories are already configured):
all$ apt-get update
all$ apt-get install docker-ce kubeadm=1.17.4-00 kubectl=1.17.4-00 kubelet=1.17.4-00
You need to make sure that the system can actually forward traffic between the nodes and pods:
all$ cat <<EOF >>/etc/sysctl.conf
# Allow IP forwarding for kubernetes
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
EOF
all$ sysctl -p
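You can check what the kernel currently reports for these keys. A quick sketch; note that the net.bridge key only exists once the br_netfilter module is loaded, so it may show as missing:

```shell
# Print the current value of each forwarding-related sysctl key.
# net.bridge.* only exists after the br_netfilter kernel module is loaded.
for key in net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.ipv6.conf.default.forwarding; do
    val=$(sysctl -n "$key" 2>/dev/null || echo "missing")
    echo "$key = $val"
done
```

All three values should read 1 after `sysctl -p` has been run on the node.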
Now we set up the control plane. Run the following commands on the master node to initialize Kubernetes:
master-node$ kubeadm config images pull
master-node$ kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.17.4 \
--ignore-preflight-errors=NumCPU \
--upload-certs \
--apiserver-cert-extra-sans <private-IP-of-master-node>
For ease of use, configure the root user's kubeconfig to use the admin config of the Kubernetes cluster:
master-node$ mkdir -p /root/.kube
master-node$ cp -i /etc/kubernetes/admin.conf /root/.kube/config
The cloud controller manager and the container storage interface require two secrets in the kube-system
namespace, containing access tokens for the Hetzner Cloud API:
master-node$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: hcloud
  namespace: kube-system
stringData:
  token: "<hetzner-api-token>"
  network: "<hetzner-network-id>"
EOF
Once the hcloud secret is in place, create the hcloud-csi secret:
master-node$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: hcloud-csi
  namespace: kube-system
stringData:
  token: "<hetzner_api_token>"
EOF
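Both secret manifests can also be rendered from variables and reviewed before applying. A sketch with placeholder values:

```shell
# Render both Hetzner secrets from variables; the values are placeholders
# to be replaced with the API token and network ID noted down earlier.
HCLOUD_API_TOKEN="example-token"
HCLOUD_NETWORK_ID="12345"

SECRETS=$(cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: hcloud
  namespace: kube-system
stringData:
  token: "${HCLOUD_API_TOKEN}"
  network: "${HCLOUD_NETWORK_ID}"
---
apiVersion: v1
kind: Secret
metadata:
  name: hcloud-csi
  namespace: kube-system
stringData:
  token: "${HCLOUD_API_TOKEN}"
EOF
)

echo "$SECRETS"
# On the master node: echo "$SECRETS" | kubectl apply -f -
```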
Now deploy the Hetzner Cloud Controller Manager and Cluster Networking:
master-node$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/hcloud-cloud-controller-manager/master/deploy/ccm-networks.yaml
master-node$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Deploy the Hetzner Cloud Container Storage Interface to the Cluster:
master-node$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csidriver.yaml
master-node$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.14/pkg/crd/manifests/csinodeinfo.yaml
master-node$ kubectl apply -f https://raw.githubusercontent.com/hetznercloud/csi-driver/v1.4.0/deploy/kubernetes/hcloud-csi.yml
To join the worker nodes, run the following command on the master node and execute its output on each worker node:
master-node$ kubeadm token create --print-join-command
1.4 Installing ArgoCD on the Kubernetes Cluster
Argo CD runs as an operator, and we have to deploy it on Kubernetes. We need an argocd
namespace that runs its services. Create the namespace:
kubectl create namespace argocd
Install Argo CD into the argocd namespace:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Configure the Argo CD CLI with username and password admin:
kubectl -n argocd patch secret argocd-secret \
-p '{"stringData": {"admin.password": "$2a$10$mivhwttXM0U5eBrZGtAG8.VSRL1l9cZNAmaSaqotIzXRBRwID1NT.",
"admin.passwordMtime": "'$(date +%FT%T)'"
}}'
argocd login localhost:10443 --username admin --password admin --insecure
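The long $2a$10… string in the patch is a bcrypt hash of the password admin. To choose a different password, generate your own hash first. A sketch using htpasswd from the apache2-utils package, mirroring the approach suggested in the Argo CD FAQ:

```shell
# Generate a bcrypt hash that Argo CD accepts for a new admin password.
# Requires htpasswd from the apache2-utils package.
NEW_PASSWORD="admin"   # replace with your own password
if command -v htpasswd >/dev/null 2>&1; then
    # htpasswd emits the $2y bcrypt prefix; Argo CD expects $2a.
    HASH=$(htpasswd -nbBC 10 "" "$NEW_PASSWORD" | tr -d ':\n' | sed 's/$2y/$2a/')
    echo "$HASH"
else
    echo "htpasswd not found; install apache2-utils first"
fi
```

Paste the printed hash into the admin.password field of the patch above.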
Expose ArgoCD UI:
kubectl port-forward svc/argocd-server -n argocd <port>:443 > /dev/null 2>&1 &
Now you can access the Argo CD UI at “localhost:<port>”. Username and password are both “admin”.
Click create application and follow the procedure to add a repository. The repository must contain deployment files; a sample can be found at Argo CD Examples, which also demonstrates different ways of writing the deployment files.
Click the “sync” button, then click the application card to see the deployment.
In the next article, I will explain application access using Ingress, how to set up secrets for accessing private repositories, and the monitoring tool Prometheus.