Kubeadm on GCE

Stéphane Beuret
Nov 4, 2019

Creating a three-node Kubernetes cluster on GCP does not seem, at first, very complicated to set up: launch three GCE instances, give them a quick kick of kubeadm, and the trick is done; in five minutes the cluster is up and running!

And then we say, “Heck, I don’t have a StorageClass, how do you set up a StorageClass on GCP?” … And five minutes later: “No, I don’t have an Ingress Controller either, how does that work on GCP?” … Classic, and these are only a few of the small difficulties we usually face. After all, Google, Stack Overflow and GitHub are our friends … a priori.

However, I don’t know if it’s because Google always puts GKE forward that I had so much trouble finding the information I wanted, or if it’s because few people want to bring their own runtime (e.g. containerd, Kata Containers) and are perfectly happy with the one provided by default; anyway, collecting all this information took me longer than I expected (thank you, Hofstadter).

So if other people are tempted to bootstrap a Kubernetes cluster on GCE with kubeadm, here are my notes, just to spare you some frustration and a few hours of digging through dozens of pages for the information you lack.

Disclaimer: I’m always short on time, so don’t be surprised to find draft scripts that haven’t been polished yet, or even a small manual step where I use kubeadm: as I said, these are just my notes, not a well-crafted cookbook. On the other hand, I do use Terraform to set up the infrastructure (again, my recipes could be optimized, but that’s not the subject here).

First step, configure the environment: you need a GCP project (linked to a billing account) and a service account. Download the JSON key file for your service account, then export its path:

export GOOGLE_CLOUD_KEYFILE_JSON="~/my-beautiful-project-39276-3b4467fd2113.json"
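
If you haven’t created a key yet, something like this should do it (the service account name is a placeholder, use your own):

gcloud iam service-accounts keys create ~/my-beautiful-project-39276-3b4467fd2113.json \
  --iam-account terraform@my-beautiful-project-39276.iam.gserviceaccount.com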

Then here is the main.tf for Terraform:

provider "google" {
  project = "${var.project_id}"
  region  = "${var.region}"
  zone    = "${var.zone}"
}

resource "google_compute_firewall" "allow-kube-api" {
  name    = "allow-kube-api"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["6443"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["kube-api"]
}

resource "google_compute_firewall" "allow-nodeports" {
  name    = "allow-nodeports"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["30000-32767"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["nodeports"]
}

resource "google_compute_instance_template" "default" {
  name        = "kubernetes-nodes-instances"
  description = "This template is used to create Kubernetes nodes instances."

  tags = ["kube-api", "nodeports"]

  labels = {
    environment = "medium"
  }

  instance_description = "description assigned to instances"
  machine_type         = "n1-standard-2"
  can_ip_forward       = true

  scheduling {
    automatic_restart   = true
    on_host_maintenance = "MIGRATE"
  }

  // Create a new boot disk from an image
  disk {
    source_image = "debian-cloud/debian-9"
    auto_delete  = true
    boot         = true
  }

  network_interface {
    network = "default"
    access_config {
      // Ephemeral IP
    }
  }

  metadata = {
    ssh-keys = "saphoooo:${file("id_rsa.pub")}"
  }

  metadata_startup_script = "${file("startup.sh")}"

  service_account {
    scopes = ["storage-full", "cloud-platform", "compute-rw", "logging-write", "monitoring", "service-control", "service-management"]
  }
}

resource "google_compute_region_instance_group_manager" "default" {
  name               = "kubernetes-node-group-manager"
  instance_template  = "${google_compute_instance_template.default.self_link}"
  base_instance_name = "kubernetes-node"
  region             = "europe-west2"

  distribution_policy_zones = ["europe-west2-a", "europe-west2-b", "europe-west2-c"]
  target_size               = "3"
}
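
Note that main.tf references var.project_id, var.region and var.zone, so a small variables.tf has to live next to it. A minimal sketch (the values follow the examples above, adjust to your own):

cat > variables.tf <<'EOF'
variable "project_id" {
  default = "my-beautiful-project-39276"
}

variable "region" {
  default = "europe-west2"
}

variable "zone" {
  default = "europe-west2-a"
}
EOF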

What is important to remember is that you need an instance group for your instances, that the firewall rules are mandatory, and that the tags on your instances must match them. With this information, create a cloud-config file:

[Global]
project-id = "my-beautiful-project-39276"
node-tags = nodeports
node-instance-prefix = "kubernetes-node"
multizone = true

As you can see, the network tag goes into node-tags, the base_instance_name into node-instance-prefix, and project-id is of course your project ID.

For the curious who would like to know what I put in my startup script: nothing complicated, I install kubeadm, containerd and Kata Containers.

#! /bin/bash
apt-get update
apt-get install libseccomp2 apt-transport-https curl -y
export VERSION="1.3.0"
wget https://storage.googleapis.com/cri-containerd-release/cri-containerd-${VERSION}.linux-amd64.tar.gz
tar --no-overwrite-dir -C / -xzf cri-containerd-${VERSION}.linux-amd64.tar.gz
echo -e "overlay\nbr_netfilter" >> /etc/modules
systemctl enable containerd
modprobe overlay
modprobe br_netfilter
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
export DEBIAN_FRONTEND=noninteractive
ARCH=$(arch)
BRANCH="${BRANCH:-master}"
source /etc/os-release
echo "deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/Debian_${VERSION_ID}/ /" > /etc/apt/sources.list.d/kata-containers.list
curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/releases:/${ARCH}:/${BRANCH}/Debian_${VERSION_ID}/Release.key | sudo apt-key add -
apt-get update
apt-get -y install kata-runtime kata-proxy kata-shim
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i '/containerd.untrusted_workload_runtime/{n;s/\"\"/\"io.containerd.kata.v2\"/}' /etc/containerd/config.toml
systemctl restart containerd
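
A note on versions: the script above installs whatever kubelet, kubeadm and kubectl packages are current, while the kubeadm configuration below pins v1.16.2. If you want them to match exactly, you can pin the packages too; a sketch, assuming the usual -00 Debian revision of the apt packages:

apt-get install -y kubelet=1.16.2-00 kubeadm=1.16.2-00 kubectl=1.16.2-00
apt-mark hold kubelet kubeadm kubectl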

At this point, terraform apply creates the three instances, each in its own zone. I can connect to them directly with ssh, because I injected my public key with Terraform:

metadata = {
  ssh-keys = "saphoooo:${file("id_rsa.pub")}"
}
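
The usual Terraform dance, then ssh straight to one of the ephemeral public IPs (gcloud or the console will show them; <external-ip> is a placeholder):

$ terraform init
$ terraform apply
$ gcloud compute instances list
$ ssh saphoooo@<external-ip>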

To bootstrap my master node, I use the following configuration file (it is essential, because it tells the cluster that I use the gce cloud provider and that my cloud configuration file lives under /etc/kubernetes/cloud-config):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: medium.howtok5678songce
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: "/var/run/containerd/containerd.sock"
  kubeletExtraArgs:
    cloud-provider: "gce"
    cloud-config: "/etc/kubernetes/cloud-config"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: medium
kubernetesVersion: v1.16.2
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
  - 35.187.224.267
  extraArgs:
    authorization-mode: Node,RBAC
    cloud-provider: "gce"
    cloud-config: "/etc/kubernetes/cloud-config"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud-config"
    mountPath: "/etc/kubernetes/cloud-config"
controllerManager:
  extraArgs:
    cloud-provider: "gce"
    cloud-config: "/etc/kubernetes/cloud-config"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud-config"
    mountPath: "/etc/kubernetes/cloud-config"

Also note certSANs, where I put the public address of my node so that I can use kubectl from my own machine.

Then I copy gce.yaml to my instance, and cloud-config to /etc/kubernetes/.
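
Concretely, that is just an scp and a move (username and public IP are the same ones used elsewhere in these notes):

$ scp gce.yaml cloud-config saphoooo@35.187.224.267:
$ ssh saphoooo@35.187.224.267 "sudo mv cloud-config /etc/kubernetes/cloud-config"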

$ sudo kubeadm init --config gce.yaml

Moments later, my master node is ready. I retrieve the cluster’s kubeconfig to my own machine and change the server address, replacing the internal address with the public one:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.187.224.267:6443
  name: medium
contexts:
- context:
    cluster: medium
    user: kubernetes-admin
  name: kubernetes-admin@medium
current-context: kubernetes-admin@medium
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
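
Rather than editing the kubeconfig by hand, the same change can be made with kubectl (the public IP is the one declared in certSANs above):

$ kubectl config set-cluster medium --server=https://35.187.224.267:6443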

I then remove the master label from my node, and I install the network addon:

$ kubectl label node kubernetes-node-4244 node-role.kubernetes.io/master-
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
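
Before going further, I like to check that the CNI is actually up; a quick sanity check, assuming the stock Weave Net manifest (its pods carry the name=weave-net label):

$ kubectl -n kube-system get pods -l name=weave-net
$ kubectl get nodes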

For the other two instances, I create a join.yaml file that contains the private IP address of the master node, as well as the bootstrap token:

apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.154.0.62:6443"
    token: medium.howtok5678songce
    unsafeSkipCAVerification: true
nodeRegistration:
  criSocket: "/var/run/containerd/containerd.sock"
  kubeletExtraArgs:
    cloud-provider: "gce"
  taints: []

I copy this file to my two other instances, along with my cloud-config file, which again goes under /etc/kubernetes/. Finally, I connect to each instance to join the cluster:

$ sudo kubeadm join --config join.yaml
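
Back on my own machine, once the two workers have joined, I check that all three nodes are Ready and that Kata actually kicks in. The pod below is just a throwaway example; the untrusted-workload annotation is the legacy cri-containerd mechanism that the untrusted_workload_runtime setting in the startup script relies on:

$ kubectl get nodes -o wide
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: kata-test
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: nginx
    image: nginx
EOF
$ kubectl exec kata-test -- uname -r

If the pod really runs under Kata, the kernel version reported once it is Running differs from the host’s.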

Now, any Service exposed via LoadBalancer will create a load balancer in GCP. As for the StorageClass, it is easy enough to find on the internet, but I’ll save you the search:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - europe-west2-a
    - europe-west2-b
    - europe-west2-c
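
To convince yourself that both pieces work, a throwaway PVC and LoadBalancer Service are enough (the names are just examples; apply the StorageClass above first):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: test-lb
spec:
  type: LoadBalancer
  selector:
    app: test
  ports:
  - port: 80
EOF
$ kubectl get pvc test-pvc
$ kubectl get svc test-lb -w

The PVC should go Bound to a freshly provisioned pd-standard disk, and the Service should get an EXTERNAL-IP once GCP has created the load balancer.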

And there you have it; I hope these notes will be useful to you. In my case, since I absolutely needed Kata Containers, all of this was essential!
