Set Up Kubernetes on CoreOS

Kasun de Silva
8 min read · Jan 24, 2017


This guide will walk you through installing Kubernetes 1.5.1 on bare-metal CoreOS.

You should already have a CoreOS cluster set up for this. Follow this article to set up a CoreOS cluster.

As per the above article, let's assume that you have a two-node CoreOS cluster.

Master Node

First, SSH into CoreOS node 1; this is where we will generate the certificates. (Let's keep node 1 as the master node.)

Let's create a directory called certs:

$ mkdir certs
$ cd certs

Copy the following script into the certs directory as cert_generator.sh.

#!/bin/bash
# Creating TLS certs
echo "Creating CA ROOT**********************************************************************"openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
echo "creating API Server certs*************************************************************"echo "Enter Master Node IP address:"
read master_Ip
cat >openssl.cnf<<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.3.0.1
IP.2 = $master_Ip
EOF
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
echo "Creating worker nodes"cat > worker-openssl.cnf<< EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = \$ENV::WORKER_IP
EOF
echo "Enter the number of worker nodes:"
read worker_node_num
for (( c=1; c<= $worker_node_num; c++ ))
do
echo "Enter the IP Address for the worker node_$c :"
read ip
openssl genrsa -out kube-$c-worker-key.pem 2048
WORKER_IP=$ip openssl req -new -key kube-$c-worker-key.pem -out kube-$c-worker.csr -subj "/CN=kube-$c" -config worker-openssl.cnf
WORKER_IP=$ip openssl x509 -req -in kube-$c-worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out kube-$c-worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf
done
echo "Creating Admin certs********************************************************"
openssl genrsa -out admin-key.pem 2048
openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out admin.pem -days 365
echo "DONE !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"

Make the script executable.

$ chmod +x cert_generator.sh

Run the script; it will prompt you for the node details and then generate the certificates required for your Kubernetes installation.
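
For reference, with a single worker node a run of the script leaves roughly the following files in the certs directory (the kube-1-worker names depend on how many workers you entered; this listing is simply what my run produced):

$ ls
admin.csr   admin-key.pem   admin.pem   apiserver.csr   apiserver-key.pem   apiserver.pem
ca-key.pem  ca.pem  ca.srl  kube-1-worker.csr  kube-1-worker-key.pem  kube-1-worker.pem
openssl.cnf  worker-openssl.cnf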

Create the required directory for the keys generated previously:

$ mkdir -p /etc/kubernetes/ssl

Copy the following certs from certs to /etc/kubernetes/ssl:

/etc/kubernetes/ssl/ca.pem
/etc/kubernetes/ssl/apiserver.pem
/etc/kubernetes/ssl/apiserver-key.pem
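
For example, if you created the certs directory in your home directory as above, the copy could look like this (adjust the source path to wherever you generated the certs):

$ sudo cp ~/certs/ca.pem /etc/kubernetes/ssl/ca.pem
$ sudo cp ~/certs/apiserver.pem /etc/kubernetes/ssl/apiserver.pem
$ sudo cp ~/certs/apiserver-key.pem /etc/kubernetes/ssl/apiserver-key.pem
$ sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem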

Network Config

We will configure flannel to source its local configuration in /etc/flannel/options.env and cluster-level configuration in etcd. Create this file and edit the contents,

  • Replace ${ADVERTISE_IP} with this machine's publicly routable IP.
  • Replace ${ETCD_ENDPOINTS} with your etcd endpoints (in my case: http://192.168.19.247:2379,http://192.168.16.3:2379)
FLANNELD_IFACE=${ADVERTISE_IP}
FLANNELD_ETCD_ENDPOINTS=${ETCD_ENDPOINTS}
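
As a concrete example, with my master node's IP and etcd endpoints, /etc/flannel/options.env ends up as:

FLANNELD_IFACE=192.168.19.247
FLANNELD_ETCD_ENDPOINTS=http://192.168.19.247:2379,http://192.168.16.3:2379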

Then create the following drop-in, which will use the above configuration when flannel starts, /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf

[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env

Docker Configuration

In order for flannel to manage the pod network in the cluster, Docker needs to be configured to use it. All we need to do is require that flanneld is running prior to Docker starting.

Again, we will use a systemd drop-in, /etc/systemd/system/docker.service.d/40-flannel.conf

[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env

Create the Docker CNI options file, /etc/kubernetes/cni/docker_opts_cni.env

DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""

If using flannel for networking, set up the flannel CNI configuration as below in /etc/kubernetes/cni/net.d/10-flannel.conf

{
  "name": "podnet",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}

Create the kubelet Unit

The kubelet is the agent on each machine that starts and stops Pods and handles other machine-level tasks. It communicates with the API server (which also runs on this master node) using the TLS certificates we placed on disk earlier.

Create /etc/systemd/system/kubelet.service.

  • Replace ${ADVERTISE_IP} with this node's publicly routable IP.
  • Replace ${DNS_SERVICE_IP} with 10.3.0.10
[Service]
Environment=KUBELET_VERSION=v1.5.1_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=http://127.0.0.1:8080 \
--register-schedulable=false \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--container-runtime=docker \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--hostname-override=${ADVERTISE_IP} \
--cluster_dns=${DNS_SERVICE_IP} \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Set Up the kube-apiserver Pod

The API server is where most of the magic happens. It is stateless by design and takes in API requests, processes them and stores the result in etcd if needed, and then returns the result of the request.

Create /etc/kubernetes/manifests/kube-apiserver.yaml

  • Replace ${ETCD_ENDPOINTS} with your CoreOS hosts.
  • Replace ${SERVICE_IP_RANGE} with 10.3.0.0/24
  • Replace ${ADVERTISE_IP} with this node's publicly routable IP.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=${ETCD_ENDPOINTS}
    - --allow-privileged=true
    - --service-cluster-ip-range=${SERVICE_IP_RANGE}
    - --secure-port=443
    - --advertise-address=${ADVERTISE_IP}
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true
    - --anonymous-auth=false
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
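
Once the kubelet (started later in this guide) has launched this pod, you can sanity-check the API server from the master itself over the insecure local port it keeps on 127.0.0.1:8080; the first command should simply print ok:

$ curl http://127.0.0.1:8080/healthz
$ curl http://127.0.0.1:8080/version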

Set Up the kube-proxy Pod

We’re going to run the proxy just like we did the API server. The proxy is responsible for directing traffic destined for specific services and pods to the correct location. The proxy communicates with the API server periodically to keep up to date.

Both the master and worker nodes in your cluster will run the proxy.

All you have to do is create /etc/kubernetes/manifests/kube-proxy.yaml; there are no settings that need to be configured.

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=http://127.0.0.1:8080
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
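
In the default iptables proxy mode, the proxy does this traffic steering by programming iptables rules on the host. Once it is running (after the kubelet is started), you can peek at those rules; the exact chain names vary between versions, so treat this only as a rough check:

$ sudo iptables-save | grep -i kube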

Set Up the kube-controller-manager Pod

Create /etc/kubernetes/manifests/kube-controller-manager.yaml. It will use the TLS certificate placed on disk earlier.

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
    command:
    - /hyperkube
    - controller-manager
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --root-ca-file=/etc/kubernetes/ssl/ca.pem
    resources:
      requests:
        cpu: 200m
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
      initialDelaySeconds: 15
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Set Up the kube-scheduler Pod

The scheduler monitors the API for unscheduled pods, finds them a machine to run on, and communicates the decision back to the API.

Create /etc/kubernetes/manifests/kube-scheduler.yaml.

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
    command:
    - /hyperkube
    - scheduler
    - --master=http://127.0.0.1:8080
    - --leader-elect=true
    resources:
      requests:
        cpu: 100m
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
      initialDelaySeconds: 15
      timeoutSeconds: 15
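
At this point the manifests directory on the master should contain all four static pod definitions:

$ ls /etc/kubernetes/manifests/
kube-apiserver.yaml  kube-controller-manager.yaml  kube-proxy.yaml  kube-scheduler.yaml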

Load Changed Units

First, we need to tell systemd that we’ve changed units on disk and it needs to rescan everything,

$ sudo systemctl daemon-reload

Configure flannel Network

Earlier it was mentioned that flannel stores its cluster-level configuration in etcd, so we now need to set the Pod network IP range there. Since etcd was started as part of the CoreOS cluster setup, we can set this right away; if you don't have etcd running, start it first.

  • Replace $POD_NETWORK with 10.2.0.0/16
  • Replace $ETCD_SERVER with one url (http://ip:port) from $ETCD_ENDPOINTS
$ curl -X PUT -d "value={\"Network\":\"$POD_NETWORK\",\"Backend\":{\"Type\":\"vxlan\"}}" "$ETCD_SERVER/v2/keys/coreos.com/network/config"
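
To confirm that the configuration was stored, you can read the key back from etcd using the same $ETCD_SERVER:

$ curl "$ETCD_SERVER/v2/keys/coreos.com/network/config"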

After configuring flannel in etcd, start it so the changes take effect. Note that this will also restart the Docker daemon and could impact running containers.

$ sudo systemctl start flanneld
$ sudo systemctl enable flanneld

Start kubelet

Now that everything is configured, we can start the kubelet, which will also start the Pod manifests for the API server, the controller manager, proxy and scheduler.

$ sudo systemctl start kubelet

Ensure that the kubelet will start after a reboot:

$ sudo systemctl enable kubelet
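
It can take a few minutes for the hyperkube image to be pulled and the static pods to come up. A few checks that I find useful while waiting (none of these are required steps):

$ systemctl status kubelet
$ journalctl -u kubelet -f
$ docker ps | grep hyperkube
$ curl http://127.0.0.1:10252/healthz   # controller-manager
$ curl http://127.0.0.1:10251/healthz   # scheduler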

Worker Node

On the worker node as well, create the required directory for the keys generated previously:

$ mkdir -p /etc/kubernetes/ssl

Copy the following certs from certs to /etc/kubernetes/ssl, as shown in the example below:

/etc/kubernetes/ssl/ca.pem
/etc/kubernetes/ssl/kube-1-worker.pem
/etc/kubernetes/ssl/kube-1-worker-key.pem
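
Since the certs were generated on the master, copy them over to the worker. Note that the kubelet unit and kubeconfig below reference /etc/kubernetes/ssl/worker.pem and /etc/kubernetes/ssl/worker-key.pem, so it is simplest to rename the worker cert and key as you copy them. A rough example, assuming the master is 192.168.19.247 and the certs live in ~/certs there:

$ scp core@192.168.19.247:certs/ca.pem .
$ scp core@192.168.19.247:certs/kube-1-worker.pem .
$ scp core@192.168.19.247:certs/kube-1-worker-key.pem .
$ sudo cp ca.pem /etc/kubernetes/ssl/ca.pem
$ sudo cp kube-1-worker.pem /etc/kubernetes/ssl/worker.pem
$ sudo cp kube-1-worker-key.pem /etc/kubernetes/ssl/worker-key.pem
$ sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem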

Network Config

Just like earlier, we have to configure flannel to source its local configuration in /etc/flannel/options.env and its cluster-level configuration in etcd. Create this file and edit the contents,

  • Replace ${ADVERTISE_IP} with this machine's publicly routable IP.
  • Replace ${ETCD_ENDPOINTS} with your etcd endpoints (in my case: http://192.168.19.247:2379,http://192.168.16.3:2379)
FLANNELD_IFACE=${ADVERTISE_IP}
FLANNELD_ETCD_ENDPOINTS=${ETCD_ENDPOINTS}

Then create the following drop-in, which will use the above configuration when flannel starts, /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf

[Service]
ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env

Docker Configuration

In order for flannel to manage the pod network in the cluster, Docker needs to be configured to use it. All we need to do is require that flanneld is running prior to Docker starting.

Again, we will use a systemd drop-in, /etc/systemd/system/docker.service.d/40-flannel.conf

[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env

Create the Docker CNI options file, /etc/kubernetes/cni/docker_opts_cni.env

DOCKER_OPT_BIP=""
DOCKER_OPT_IPMASQ=""

If using flannel for networking, set up the flannel CNI configuration as below in /etc/kubernetes/cni/net.d/10-flannel.conf

{
  "name": "podnet",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}

Create the kubelet Unit

Let's create the kubelet service on the worker node as well.

Create /etc/systemd/system/kubelet.service.

  • Replace ${ADVERTISE_IP} with this node's publicly routable IP.
  • Replace ${DNS_SERVICE_IP} with 10.3.0.10
  • Replace ${MASTER_HOST} with the master node's API endpoint (in my case: https://192.168.19.247:443)
[Service]
Environment=KUBELET_VERSION=v1.5.1_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=${MASTER_HOST} \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--container-runtime=docker \
--register-node=true \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--hostname-override=${ADVERTISE_IP} \
--cluster_dns=${DNS_SERVICE_IP} \
--cluster_domain=cluster.local \
--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
--tls-cert-file=/etc/kubernetes/ssl/worker.pem \
--tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
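
Before starting the kubelet it is worth confirming that this worker can reach the master's secure API port with the certs placed earlier. A quick check from the worker, using my master address (a small JSON version payload means both connectivity and the certificates are fine):

$ curl --cacert /etc/kubernetes/ssl/ca.pem \
  --cert /etc/kubernetes/ssl/worker.pem \
  --key /etc/kubernetes/ssl/worker-key.pem \
  https://192.168.19.247:443/version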

Set Up the kube-proxy Pod

All you have to do is create /etc/kubernetes/manifests/kube-proxy.yaml; there are no settings that need to be configured.

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
    command:
    - /hyperkube
    - proxy
    - --master=${MASTER_HOST}
    - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: "ssl-certs"
    - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
      name: "kubeconfig"
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: "etc-kube-ssl"
      readOnly: true
  volumes:
  - name: "ssl-certs"
    hostPath:
      path: "/usr/share/ca-certificates"
  - name: "kubeconfig"
    hostPath:
      path: "/etc/kubernetes/worker-kubeconfig.yaml"
  - name: "etc-kube-ssl"
    hostPath:
      path: "/etc/kubernetes/ssl"

Set Up kubeconfig

To facilitate secure communication between Kubernetes components, a kubeconfig file is used to define the authentication settings. In this case, the kubelet and proxy read this configuration to communicate with the API server.

Create /etc/kubernetes/worker-kubeconfig.yaml:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

Start Services

Now we can start the Worker services.

Load Changed Units

Tell systemd to rescan the units on disk:

$ sudo systemctl daemon-reload

Start kubelet and flannel

Start the kubelet, which will start the proxy.

$ sudo systemctl start flanneld
$ sudo systemctl start kubelet

Ensure that the services start on each boot:

$ sudo systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /etc/systemd/system/flanneld.service.
$ sudo systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.

To check the health of the kubelet systemd unit that we created, run systemctl status kubelet.service.
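
Within a minute or two the worker should register itself with the API server. kubectl is not set up yet, so a simple way to confirm this is to query the insecure local port on the master node; both nodes should show up under the IPs used as --hostname-override:

$ curl -s http://127.0.0.1:8080/api/v1/nodes | grep '"name"'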

Congratulations, you have now set up your Kubernetes cluster on CoreOS. Read Setting up kubectl next to start working with your Kubernetes cluster.
