Install a Kubernetes cluster with kubeadm on Ubuntu, step by step
Environment
OS: Ubuntu 16.04 LTS
CPU: Intel Core i7-7700
Memory: 32 GB
Step 1: Prepare necessary tools
Change to root:
sudo su
Turn off swap:
swapoff -a
Edit the swap config:
vi /etc/fstab
Comment out the swap entry so swap stays disabled after a reboot:
# swap was on /dev/sda3 during installation
# UUID=9f163eae-1ec4-4ea7-bbbe-2911e8ccd62f none
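If you prefer not to edit the file by hand, the same change can be scripted. A sketch: it comments out every fstab line containing a swap entry, keeping a backup first:

```shell
# Keep a backup of the original fstab
cp /etc/fstab /etc/fstab.bak
# Prepend '#' to every line with a swap entry (idempotent: already-commented
# lines keep a single leading '#')
sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab
```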
Configure the iptables bridge settings:
vi /etc/ufw/sysctl.conf
Add the following lines at the end:
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
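Alternatively, the same settings can take effect immediately, without waiting for the reboot, by writing them to a sysctl drop-in file and reloading. A sketch (the file name 99-kubernetes.conf is my choice; the dotted key syntax is equivalent to the slash syntax above):

```shell
# Load the bridge netfilter module (|| true in case it is built into the kernel)
modprobe br_netfilter || true
# Write the bridge settings to a sysctl drop-in file
cat <<EOF >/etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
# Reload all sysctl configuration (|| true: the net.bridge keys cannot be set
# until br_netfilter is actually loaded)
sysctl --system || true
```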
Reboot for the changes to take effect.
Install dependencies:
sudo su
apt-get install ebtables ethtool
Reboot again
Step 2: Install kubeadm
Change to root:
sudo su
Install HTTPS support for apt:
apt-get update && apt-get install -y apt-transport-https
Get the Kubernetes repository key:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
Add the Kubernetes repository to the apt sources:
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
Install kubelet, kubeadm, kubectl, and Docker:
apt-get update && apt-get install -y kubelet kubeadm kubectl docker.io
Step 3: Create the cluster
Change to root:
sudo su
Initialize the cluster with kubeadm (this pod network CIDR matches Calico's default):
kubeadm init --pod-network-cidr=192.168.0.0/16
After a while, you will get the following output:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.91.4.105:6443 --token alpia1.8cjc1yfv5ezganq7 --discovery-token-ca-cert-hash sha256:3f2da2fa1967b8e974b9097fcdd15c66e0d136db5b1f08b3db7fe45c3e2b790b
Change to regular user:
exit
Copy kubectl config:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install network plugin (Calico):
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
Remove the master taint so pods can be scheduled on this node (needed for a single-node cluster):
kubectl taint nodes --all node-role.kubernetes.io/master-
Now let’s try with kubectl:
kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
my-kubernete   Ready    master   83m   v1.13.2
And deploy a pod:
kubectl run hello --image=k8s.gcr.io/echoserver:1.4 --port=8080
kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
hello-5975cd9c9d-5pvsn   1/1     Running   0          120m
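To reach the echoserver from outside the cluster, a NodePort Service can be added. A sketch (hello-svc.yaml is a name of my choosing; the run: hello selector assumes the label that kubectl run applies to the Deployment it creates):

```yaml
# hello-svc.yaml -- NodePort Service for the echoserver Deployment
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    run: hello        # label set by `kubectl run hello ...`
  ports:
  - port: 8080        # service port
    targetPort: 8080  # container port from --port=8080
```

Apply it with kubectl apply -f hello-svc.yaml, then kubectl get svc hello shows the node port that was assigned.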
Step 4: Troubleshooting
If you find that the CoreDNS pods keep crashing like this:
$ kubectl get pod -n kube-system
NAME                       READY   STATUS             RESTARTS   AGE
coredns-86c58d9df4-hrp2w   0/1     CrashLoopBackOff   542        1d
coredns-86c58d9df4-ptgsk   0/1     CrashLoopBackOff   543        1d
You need to manually replace the contents of /etc/resolv.conf with real upstream nameservers:
nameserver 8.8.4.4
nameserver 8.8.8.8
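A common cause is that /etc/resolv.conf points at a resolver running on the node itself (for example 127.0.1.1 from a local dnsmasq), which makes CoreDNS forward queries back to itself in a loop. The fix above can be scripted as follows (a sketch; 8.8.4.4 and 8.8.8.8 are Google's public resolvers):

```shell
# Keep a backup, then point /etc/resolv.conf at real upstream resolvers
cp /etc/resolv.conf /etc/resolv.conf.bak
cat <<EOF >/etc/resolv.conf
nameserver 8.8.4.4
nameserver 8.8.8.8
EOF
# Recreate the CoreDNS pods so they pick up the new file:
#   kubectl -n kube-system delete pod -l k8s-app=kube-dns
```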
Then the DNS service in the cluster works normally.
Step 5: Join a remote node
To be continued