Setting up Standalone Kubernetes Cluster behind Corporate Proxy on Ubuntu 16.04

ankur garg
Feb 11, 2018


When I first set this up, I kept running into problems a newbie is likely to face, such as the installation getting stuck while downloading the control plane and the master node never becoming Ready. So in this post I'll detail the process for successfully setting up a Kubernetes cluster, along with the problems I faced while setting it up, some of which are still open for me.

Setting up Docker (Ubuntu 16.04 only)

(Warning: Medium turns straight quotes into curly ones, so fix the inverted commas in the commands when copy-pasting :P)

Install the highest version of Docker supported by Kubernetes.
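On Ubuntu 16.04, one way to do this is to install the distro package (a minimal sketch; check the Kubernetes release notes for the Docker versions currently supported):

sudo apt-get update
sudo apt-get install -y docker.io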

Create the directory docker.service.d:

sudo mkdir /etc/systemd/system/docker.service.d

Edit /etc/systemd/system/docker.service.d/http-proxy.conf:

[Service]
Environment="HTTP_PROXY=http://proxy-host:proxy-port/"
Environment="NO_PROXY=localhost,127.0.0.0/8,localip_of_machine"

sudo systemctl daemon-reload
sudo systemctl restart docker
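To confirm the drop-in took effect, you can ask systemd what environment it passes to the Docker daemon (my own sanity check, not part of the original setup):

sudo systemctl show --property=Environment docker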

Preparing for setup

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
apt-get update -y
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
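If apt cannot reach these repositories through the corporate proxy (sudo strips proxy environment variables by default), one workaround is a dedicated apt proxy config. This is my own suggestion, not part of the original setup:

cat <<EOF | sudo tee /etc/apt/apt.conf.d/95proxy
Acquire::http::Proxy "http://proxy-host:proxy-port/";
Acquire::https::Proxy "http://proxy-host:proxy-port/";
EOF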

Proxy Setup

With everything installed (I hope :)), I next set up the proxy, with http_proxy and https_proxy (both lower- and uppercase environment variables) pointing to the proxy server, and no_proxy set to the IPs that should not go through the proxy server. For this system, no_proxy had the host IP, 127.0.0.1, the IPs of the IPv4 pod pool, and the service IPs. The defaults use large subnets, so I reduced these to keep the no_proxy setting manageable.

For the IPv4 pool, I’m using 192.168.0.0/24 (reduced size from default), and for the service IP subnet, I’m using 10.96.0.0/24. I used these lines in .bashrc to create the no_proxy setting (gedit .bashrc):

export http_proxy=http://proxy-host:proxy-port/
export HTTP_PROXY=$http_proxy
export https_proxy=$http_proxy
export HTTPS_PROXY=$http_proxy
printf -v lan '%s,' localip_of_machine
printf -v pool '%s,' 192.168.0.{1..253}
printf -v service '%s,' 10.96.0.{1..253}
export no_proxy="${lan%,},${service%,},${pool%,},127.0.0.1";
export NO_PROXY=$no_proxy

Make sure you’ve got these environment variables sourced (source .bashrc).
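A quick check that the variables are in place (my own sanity check):

env | grep -i proxy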

Are We There Yet?

Hopefully, you have everything prepared. If so, here are the steps to start things up (run as the root user!):

kubeadm init --apiserver-advertise-address=localip_of_machine --service-cidr=10.96.0.0/16

This will display a kubeadm join command that other nodes can use to join the cluster (I have tried adding another node and running multiple pods on the cluster).
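The join command printed by kubeadm init looks roughly like this (a sketch; the token and hash are unique to your cluster, and the exact flags depend on your kubeadm version):

kubeadm join --token <token> localip_of_machine:6443 --discovery-token-ca-cert-hash sha256:<hash>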

export KUBECONFIG=/etc/kubernetes/kubelet.conf
export KUBECONFIG=/etc/kubernetes/admin.conf   # admin.conf is the one that takes effect
kubectl taint nodes --all node-role.kubernetes.io/master-   # allow pods to schedule on the master
kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
kubectl get pods --all-namespaces -o wide

At this point (after some time), you should be able to see that all the pods are up and have an IP address of the host, except for the DNS pod, which gets an IP from the Calico pod pool (192.168.68.78 below; note that the stock calico.yaml defaults its CALICO_IPV4POOL_CIDR to 192.168.0.0/16, so edit it if you want the reduced /24 to actually apply):

root@khost2:~# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system calico-etcd-jdc2l 1/1 Running 0 1m 10.5.20.247 khost2
kube-system calico-kube-controllers-d554689d5-8svs7 1/1 Running 0 1m 10.5.20.247 khost2
kube-system calico-node-m486p 2/2 Running 0 1m 10.5.20.247 khost2
kube-system etcd-khost2 1/1 Running 0 8m 10.5.20.247 khost2
kube-system kube-apiserver-khost2 1/1 Running 0 7m 10.5.20.247 khost2
kube-system kube-controller-manager-khost2 1/1 Running 1 7m 10.5.20.247 khost2
kube-system kube-dns-6f4fd4bdf-l2v4v 3/3 Running 0 7m 192.168.68.78 khost2
kube-system kube-proxy-dx89g 1/1 Running 0 7m 10.5.20.247 khost2
kube-system kube-scheduler-khost2 1/1 Running 0 7m 10.5.20.247 khost2

You can also check that the services land in the service CIDR passed to kubeadm init (note that with --service-cidr=10.96.0.0/16, a ClusterIP like calico-etcd's below can fall outside the 10.96.0.0/24 range covered by no_proxy):

root@khost2:~# kubectl get svc --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13m <none>
kube-system calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 6m k8s-app=calico-etcd
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 12m k8s-app=kube-dns

Now, you should be able to use kubectl to apply manifests for containers (I did one with NGINX) and verify that a container can ping other containers, the host, and other nodes on the host's network.
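As a quick smoke test, something like the following should work (my own sketch, not the exact manifest from the original setup; busybox is used for the ping because the stock nginx image doesn't ship with it):

kubectl run nginx --image=nginx --port=80                 # simple NGINX deployment
kubectl run pingtest --image=busybox --restart=Never -- sleep 3600
kubectl exec pingtest -- ping -c 3 localip_of_machine     # ping the host from a pod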

Problems discovered in the initial phase

1. Do you have $KUBECONFIG pointing correctly? (Git source)

export KUBECONFIG=/etc/kubernetes/kubelet.conf

2. Turning off swap for Kubernetes (Git source)

Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to pass fail-swap-on=false:

Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

If the above does not work:

sudo swapoff -a                                       # disables swap for this session
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # comments out swap in /etc/fstab, disabling it permanently

3. Getting an error on kubectl get nodes (Git source)

export KUBECONFIG=/etc/kubernetes/kubelet.conf

4. Unable to join node to cluster (x509: certificate has expired or is not yet valid)

The clock on the joining node does not match the master node's clock, so the certificates appear invalid; synchronize the clocks and retry.
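One way to synchronize the clock on Ubuntu 16.04 (my suggestion, not from the original troubleshooting; behind a corporate firewall you may need an internal NTP server instead of the public pool):

sudo apt-get install -y ntpdate
sudo ntpdate pool.ntp.org    # one-shot clock sync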

5. Setting up the Weave CNI plugin instead of Calico (did not work for me)

I tried setting up Weave initially, but it did not work. I also had trouble removing the plugin: even after kubeadm reset, traces of Weave remained on the machine and caused problems.

I used the exact process described in the Weave setup guide:

sysctl net.bridge.bridge-nf-call-iptables=1
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
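For the leftover traces mentioned above, this is the cleanup I would attempt (a hedged sketch, assuming the default Weave file and interface names; verify the paths on your machine):

sudo rm -f /etc/cni/net.d/*weave*        # leftover CNI config
sudo ip link delete weave 2>/dev/null    # leftover Weave bridge interface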

6. Getting at the error logs

journalctl -xeu kubelet
systemctl status kubelet

A nice visual explanation of a Kubernetes cluster:

https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16
