Kubernetes CKA hands-on challenge #5 Manage Certificates
#####################################
THIS CHALLENGE IS NO LONGER UPDATED HERE AND HAS MOVED TO:
https://killercoda.com/killer-shell-cka
######################################
Content
- Multi Container Issue
- Scheduler Playground
- Advanced Scheduling
- Node Management
- Manage Certificates
- Pod Priority
- RBAC
- >>> MORE <<<
Rules!
- be fast, avoid creating yaml manually from scratch
- use only kubernetes.io/docs for help.
- check my solution after you did yours. You probably have a better one!
Notices
- This challenge was tested on k8s 1.18. Please let us know in the comments if you encounter any issues
- how to be fast with Kubectl ≥ 1.18
Scenario Setup
You will start a two node cluster on your machine, one master and one worker. For this you need to install VirtualBox and vagrant, then:
git clone git@github.com:wuestkamp/cka-example-environments.git
cd cka-example-environments/cluster1
./up.sh
vagrant ssh cluster1-master1
vagrant@cluster1-master1:~$ sudo -i
root@cluster1-master1:~# kubectl get node
You should be connected as root@cluster1-master1. You can connect to the worker nodes as root, e.g. ssh root@cluster1-worker1.
If you want to destroy the environment again, run ./down.sh. You should destroy the environment after usage so no more resources are used!
Today's Task: Manage Certificates
Check the expiry date and renew the etcd server certificate, without the help of kubeadm.
Solution
The following commands are executed as root@cluster1-master1:
alias k=kubectl
1.
First we check how etcd is set up:
ps aux | grep etcd # there is a process
ps -o ppid= -p 6765 # find the parent
ps aux | grep 6663 # shows parent is containerd
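The two ps steps above can be chained into one lookup. A minimal sketch, using the current shell's PID as a stand-in (the PIDs 6765 and 6663 above were specific to that particular boot):

```shell
# look up a PID's parent and print the parent's command name;
# the current shell's PID stands in for the etcd PID here
pid=$$
ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
ps -o comm= -p "$ppid"
```

On the cluster you would set pid to the etcd PID from ps aux and expect containerd as the answer.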
So etcd must be running as a container:
k -n kube-system get pod | grep etcd # static pod!
vim /etc/kubernetes/manifests/etcd.yaml
Let's find the certificate in question:
...
spec:
containers:
- command:
- etcd
- --advertise-client-urls=https://192.168.101.101:2379
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --client-cert-auth=true
- --data-dir=/var/lib/etcd
- --initial-advertise-peer-urls=https://192.168.101.101:2380
- --initial-cluster=cluster1-master1=https://192.168.101.101:2380
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --listen-client-urls=https://127.0.0.1:2379,https://192.168.101.101:2379
- --listen-metrics-urls=http://127.0.0.1:2381
- --listen-peer-urls=https://192.168.101.101:2380
- --name=cluster1-master1
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-client-cert-auth=true
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --snapshot-count=10000
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
...
Head to the Kubernetes Docs and search for “openssl” to find example commands!
Also helpful here: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md
Check the expiry date:
openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt
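If you only need the expiry line, openssl can print it directly with -enddate. A self-contained sketch (it generates a throwaway certificate so it runs anywhere; on the cluster you would point -in at /etc/kubernetes/pki/etcd/server.crt):

```shell
# generate a throwaway self-signed cert just to demonstrate the flag
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout "$tmp/server.key" -out "$tmp/server.crt" -days 365 2>/dev/null

# print only the expiry date ("notAfter=...")
openssl x509 -noout -enddate -in "$tmp/server.crt"
```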
Start the renewal:
cd /etc/kubernetes/pki/etcd
mv server.crt server.crt.old
Create an openssl config:
cat > openssl-etcd.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = cluster1-master1
DNS.2 = localhost
IP.1 = 192.168.101.101
IP.2 = 127.0.0.1
IP.3 = 0:0:0:0:0:0:0:1
EOF
This creates the config necessary for the CSR. The IP addresses and DNS names are the SANs that were displayed when checking the expiration date. Next we create the CSR:
openssl req -new -key server.key -subj "/CN=cluster1-master1" -out server.csr -config openssl-etcd.cnf
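Before signing, it's worth checking that the CSR actually carries the SANs from the config, since a missing req_extensions line would silently drop them. A self-contained sketch that recreates a key and CSR with the same kind of config:

```shell
tmp=$(mktemp -d); cd "$tmp"
# same config shape as above, trimmed to the parts that matter for the check
cat > openssl-etcd.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
subjectAltName = @alt_names
[alt_names]
DNS.1 = cluster1-master1
DNS.2 = localhost
IP.1 = 192.168.101.101
IP.2 = 127.0.0.1
EOF
openssl genrsa -out server.key 2048 2>/dev/null
openssl req -new -key server.key -subj "/CN=cluster1-master1" \
  -out server.csr -config openssl-etcd.cnf
# the SANs must appear here, otherwise the signed cert won't have them
openssl req -noout -text -in server.csr | grep -A1 "Subject Alternative Name"
```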
We use the same CN as in the old certificate, and create the new certificate by signing the CSR:
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -extensions v3_req -extfile openssl-etcd.cnf -days 10000
And check the new expiration:
openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt
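Besides the expiry, it's worth confirming that the renewed certificate still chains to etcd's CA; on the cluster that would be openssl verify -CAfile /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/etcd/server.crt. A self-contained sketch of the same sign-and-verify round trip, using a throwaway CA:

```shell
tmp=$(mktemp -d); cd "$tmp"
# throwaway CA standing in for /etc/kubernetes/pki/etcd/ca.crt and ca.key
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=etcd-ca" \
  -keyout ca.key -out ca.crt -days 365 2>/dev/null
# key + CSR standing in for the renewed server certificate
openssl req -newkey rsa:2048 -nodes -subj "/CN=cluster1-master1" \
  -keyout server.key -out server.csr 2>/dev/null
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365 2>/dev/null
# prints "server.crt: OK" when the chain is valid
openssl verify -CAfile ca.crt server.crt
```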
Restart etcd and make sure everything still works:
cd /etc/kubernetes/manifests
mv etcd.yaml ..   # kubelet stops the static pod
sleep 5
mv ../etcd.yaml . # kubelet recreates it, picking up the new certificate
Wait a minute or two for the API server to be back up and ready, then:
k get node,pod # should be back to normal!
I don't think you'll need to dig this deep into certificates in the CKA exam, but it's always good to learn a bit more than necessary :)