kubeadm init/join and ExternalIP vs InternalIP

Alasdair Lumsden
3 min read · Dec 26, 2019

I love Kubernetes, but I also want to make use of it on my own hardware. At EveryCity, we have a staff test and training environment running the open source Joyent Triton Datacenter stack, which supports Terraform, cloud-config, and much more besides.

To get started, I bootstrapped 1 master node and 3 worker nodes using Terraform and cloud-config. Each node has two network interfaces: one with a real public IP, one with a private IP in the RFC 1918 range. So far so good.

To bootstrap my Kubernetes cluster, I decided to use kubeadm via the official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

The main incantation for this is:

# kubeadm init --pod-network-cidr=192.168.0.0/16

Running that prints a kubeadm join command to run on the workers; after joining them, you end up with this cluster:

# kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP
k8s-master     Ready    master   4h9m   v1.17.0   185.x.x.20    <none>
k8s-worker-1   Ready    <none>   4h3m   v1.17.0   185.x.x.18    <none>
k8s-worker-2   Ready    <none>   4h3m   v1.17.0   185.x.x.19    <none>
k8s-worker-3   Ready    <none>   4h2m   v1.17.0   185.x.x.17    <none>

So far so good, right? Well, not quite. Our public IP is listed as the Internal-IP, and External-IP shows as <none>.

This isn’t great: first of all, it’s simply wrong. Secondly, intra-cluster traffic is going over the external network interfaces, which is bad for security.

I personally consider this behaviour unintuitive, and I filed this bug (in the wrong bug tracker, so it was closed), but I also received some exceptionally helpful feedback from Lubomir I. Ivanov (thank you!) regarding the issue.

So, how do we fix this?

tl;dr version: configure /etc/hosts so the hostname resolves to the internal IP prior to running kubeadm init/join, and it will use the internal IP instead of the external one. Further, pass --control-plane-endpoint=INTERNALIP to kubeadm init so we use the internal IP for the control plane.

I did this early on in my cloud-config runcmd (for 10.0.0.0/8; adjust as necessary):

 - echo $(hostname -i | xargs -n1 | grep '^10\.') $(hostname) >> /etc/hosts
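As a sketch of what that one-liner does, with example addresses standing in for the real `hostname -i` output:

```shell
# Simulated `hostname -i` output: one public and one private address.
# (In the real runcmd this comes from `hostname -i`; these are examples.)
ips="185.0.0.20 10.0.0.20"
host="k8s-master"

# Keep only the RFC 1918 address (10.0.0.0/8 here)...
internal_ip=$(echo "$ips" | xargs -n1 | grep '^10\.')

# ...and build the hosts entry (the real command appends this to /etc/hosts).
entry="$internal_ip $host"
echo "$entry"
```

With the example addresses above, this prints `10.0.0.20 k8s-master`, which is exactly the line /etc/hosts needs so the hostname resolves to the internal IP.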

Additionally, specify the internal IP as the control-plane-endpoint to kubeadm init:

 - kubeadm init --pod-network-cidr=192.168.0.0/16 --control-plane-endpoint=$(hostname -i | xargs -n1 | grep '^10\.')

An alternative to adjusting /etc/hosts is to specify the node-ip explicitly. This can be done in one of two ways.

The first is to pass an InitConfiguration (when creating a cluster) via the --config option:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "$INTERNALIP"
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "$INTERNALIP"
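To actually use this, write the file out and hand it to kubeadm init via --config. A minimal sketch, assuming an internal IP of 10.0.0.20 and /tmp as a scratch location (both placeholders; substitute your own):

```shell
# Hypothetical internal IP; substitute your node's RFC 1918 address.
INTERNALIP=10.0.0.20

# Write the InitConfiguration with the internal IP filled in.
cat > /tmp/kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "$INTERNALIP"
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "$INTERNALIP"
EOF

# Then initialise the cluster with it:
# kubeadm init --config /tmp/kubeadm-init.yaml
```

Note that when you go the --config route, kubeadm generally won't let you mix in flags like --pod-network-cidr; the pod network CIDR instead goes into a ClusterConfiguration document (networking.podSubnet) in the same file.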

And as a JoinConfiguration when joining a cluster:

apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "$INTERNALIP"
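Similarly for the workers: write the JoinConfiguration to a file and pass it to kubeadm join --config. One caveat: when joining via --config, the discovery details that normally arrive on the command line (API server endpoint, bootstrap token, CA cert hash) have to go in the file too. A sketch with placeholder values; everything below except the field names comes from your own kubeadm init output:

```shell
# All values here are placeholders; take the real endpoint, token and
# CA cert hash from the `kubeadm join` command printed by `kubeadm init`.
INTERNALIP=10.0.0.21

cat > /tmp/kubeadm-join.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.0.20:6443"
    token: "abcdef.0123456789abcdef"
    caCertHashes:
      - "sha256:<hash from kubeadm init output>"
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "$INTERNALIP"
EOF

# Then join the worker with it:
# kubeadm join --config /tmp/kubeadm-join.yaml
```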

If that’s too much hard work, you can set the node-ip after initialisation by passing --node-ip=INTERNALIP to the kubelet. This can be done by editing /var/lib/kubelet/kubeadm-flags.env and running systemctl restart kubelet.service.
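As a sketch of that edit, using a stand-in file and example contents, since the real /var/lib/kubelet/kubeadm-flags.env varies between setups:

```shell
# Hypothetical internal IP; substitute your node's RFC 1918 address.
INTERNALIP=10.0.0.20

# Stand-in for /var/lib/kubelet/kubeadm-flags.env; contents are an example.
f=/tmp/kubeadm-flags.env
echo 'KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni"' > "$f"

# Append --node-ip inside the quoted argument string (GNU sed in-place edit).
sed -i "s/\"\$/ --node-ip=$INTERNALIP\"/" "$f"
cat "$f"

# On the real file, follow with: systemctl restart kubelet.service
```

With the example contents above, the `cat` shows `--node-ip=10.0.0.20` appended inside the existing quoted KUBELET_KUBEADM_ARGS string.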

Note that even after all this, External-IP will remain “<none>”. Apparently that’s expected behaviour: the ExternalIP field is only populated by a cloud provider integration, so you only get to see one when running Kubernetes in a public cloud, not a private one 🤷‍♂️

So that was an interesting journey down a rabbit hole. Hope this post is helpful to you and saves you some time.

Time to configure MetalLB and Rook so I’ve got LoadBalancers and Storage!
