Kubernetes on Bare Metal: part 5, kubernetes master
Now we have all we need to install kubernetes 1.3.5.
Let’s start by downloading the kubernetes binaries (this can take quite some time, as they weigh well over 1 GB). I assume we are in the /opt/kubernetes directory on server gc01:
curl -L https://github.com/kubernetes/kubernetes/releases/download/v1.3.5/kubernetes.tar.gz -o kubernetes.tar.gz
tar xzvf kubernetes.tar.gz kubernetes/server/kubernetes-server-linux-amd64.tar.gz --strip-components=2
tar xzvf kubernetes-server-linux-amd64.tar.gz -C bin --strip-components=3
We should have all necessary binaries in the /opt/kubernetes/bin directory now.
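Before going further, it's worth a quick sanity check that the extraction actually produced the binaries we are about to use. A small sketch (KUBE_BIN is just a convenience variable I'm introducing here; adjust the path if your layout differs):

```shell
# Check that the server binaries exist and are executable.
KUBE_BIN=${KUBE_BIN:-/opt/kubernetes/bin}
for b in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl; do
  if [ -x "$KUBE_BIN/$b" ]; then
    echo "ok: $b"
  else
    echo "MISSING: $b"
  fi
done
```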
We will designate gc01 as both a master node and a worker node, which means we are going to set up quite a few services:
- Controller manager — a daemon that embeds the core control loops shipped with kubernetes, such as the replication controller and the node controller. (master only)
- Apiserver — provides the frontend to the cluster’s shared state through which all other components interact. (master only)
- Scheduler — watches for newly created pods and assigns each one to a node. (master only)
- Proxy — runs on every node and maintains the network rules that route service traffic to the right pods.
- Kubelet — the primary node agent; it runs on every node, starting containers and monitoring their health.
Let’s start with the Controller manager.
We will create a systemd service file at /etc/systemd/system/kube-controller-manager.service:
[Unit]
Description=Kubernetes Controller Manager
Documentation=http://kubernetes.io/docs/admin/kube-controller-manager/
After=network.target
After=etcd.service
[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
--cluster-name=gloriouscloud \
--leader-elect=true \
--master=https://gc01.gloriouscloud.com:8443 \
--root-ca-file=/opt/kubernetes/certs/ca.pem \
--service-account-private-key-file=/opt/kubernetes/certs/client-key.pem \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
As you can see, it will communicate to the Apiserver at https://gc01.gloriouscloud.com:8443 which doesn’t exist at the moment. So let’s create it!
First, we need to define the users that will consume the API.
Let’s create a users.csv containing the user definitions. It is a .csv file with the following structure:
password,username,id
In our case the file /opt/kubernetes/users.csv will be:
catsarehumans,elcct,1
Next, we should create a file with a token used to authenticate the service user. The structure is the same as in our users file, except that instead of a password you put an arbitrary token. We can use python to generate one:
# python -c "import uuid; print(uuid.uuid4().hex)"
27c6b4e586bb4d1fbf1bcec35aefa090
So our /opt/kubernetes/tokens.csv file will look like this:
27c6b4e586bb4d1fbf1bcec35aefa090,elcct,1
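The two steps above can be combined into a single sketch that generates the token and prints a ready-made tokens.csv line (the username and uid match our users.csv; redirect the output into /opt/kubernetes/tokens.csv on the master):

```shell
# Generate a random 32-character hex token and print a tokens.csv line for it.
TOKEN=$(python3 -c "import uuid; print(uuid.uuid4().hex)")
echo "${TOKEN},elcct,1"
```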
Next we will choose the service cluster IP range. It must not overlap with our flanneld IP range, so we will use 172.16.0.0/24.
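If you want to double-check, python’s ipaddress module can confirm that the two ranges don’t overlap. The 10.200.0.0/16 below is only a placeholder for the flannel network; substitute the range you actually configured in the earlier parts:

```shell
# Print "overlap" if the pod and service ranges intersect, "ok" otherwise.
python3 -c "
import ipaddress
pods = ipaddress.ip_network('10.200.0.0/16')  # placeholder flannel range, use yours
svcs = ipaddress.ip_network('172.16.0.0/24')  # service cluster IP range
print('overlap' if pods.overlaps(svcs) else 'ok')
"
```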
We can now create the systemd service file for the Apiserver at /etc/systemd/system/kube-apiserver.service:
[Unit]
Description=Kubernetes API Server
Documentation=http://kubernetes.io/docs/admin/kube-apiserver/
After=network.target
After=etcd.service
[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--service-account-key-file=/opt/kubernetes/certs/client-key.pem \
--tls-cert-file=/opt/kubernetes/certs/client.pem \
--tls-private-key-file=/opt/kubernetes/certs/client-key.pem \
--bind-address=0.0.0.0 \
--secure-port=8443 \
--insecure-bind-address=127.0.0.1 \
--insecure-port=8080 \
--etcd-cafile=/opt/kubernetes/certs/ca.pem \
--etcd-certfile=/opt/kubernetes/certs/client.pem \
--etcd-keyfile=/opt/kubernetes/certs/client-key.pem \
--etcd-servers=https://gc01.gloriouscloud.com:2379,https://gc02.gloriouscloud.com:2379,https://gc03.gloriouscloud.com:2379,https://gc04.gloriouscloud.com:2379,https://gc05.gloriouscloud.com:2379 \
--token-auth-file=/opt/kubernetes/tokens.csv \
--basic-auth-file=/opt/kubernetes/users.csv \
--kubelet-certificate-authority=/opt/kubernetes/certs/ca.pem \
--allow-privileged=true \
--apiserver-count=1 \
--service-cluster-ip-range=172.16.0.0/24 \
--enable-swagger-ui=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
And now it’s time for the Scheduler. We will create a systemd service file at /etc/systemd/system/kube-scheduler.service:
[Unit]
Description=Kubernetes Scheduler
Documentation=http://kubernetes.io/docs/admin/kube-scheduler/
[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Now we have all the services required for the master node. Let’s start them :)
# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl enable kube-apiserver
# systemctl enable kube-scheduler
# systemctl start kube-controller-manager
# systemctl start kube-apiserver
# systemctl start kube-scheduler
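With the services up, a quick smoke test against the apiserver’s insecure local port tells us it is answering. The insecure port is bound to 127.0.0.1 only, so run this on gc01 itself (a sketch; APISERVER is just a convenience variable):

```shell
# Hit the unauthenticated healthz endpoint on the local insecure port;
# a healthy apiserver answers with "ok".
APISERVER=${APISERVER:-http://127.0.0.1:8080}
curl -s "$APISERVER/healthz"
```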
Let’s verify that our master node services are running:
# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Unhealthy Get https://gc01.gloriouscloud.com:2379/health: remote error: bad certificate
etcd-1 Unhealthy Get https://gc02.gloriouscloud.com:2379/health: remote error: bad certificate
etcd-4 Unhealthy Get https://gc05.gloriouscloud.com:2379/health: remote error: bad certificate
etcd-3 Unhealthy Get https://gc04.gloriouscloud.com:2379/health: remote error: bad certificate
etcd-2 Unhealthy Get https://gc03.gloriouscloud.com:2379/health: remote error: bad certificate
You can see the bad certificate errors: it seems the apiserver does not use the specified cert and key when health-checking the etcd cluster. This is a known issue that has been reported on GitHub.
In the next part we will setup worker nodes.