Kubernetes the hard way on bare metal/VMs — Testing everything

Part of the Kubernetes the hard way on bare metal/VMs series

Drew Viles
Dec 14, 2018

Introduction

This guide is part of the Kubernetes the hard way on bare metal/VMs tutorial I have written. On its own it may still be useful; however, it's tailored to this tutorial, so it may not be completely suited to your needs.

Smoke test encryption

Create a secret

kubectl create secret generic kubernetes-the-bm-hard-way --from-literal="mykey=mydata"
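Before poking etcd directly, you can confirm the secret landed via the API. Note that kubectl only shows the value base64-encoded; the encryption at rest happens inside etcd, which is what the next step verifies.

kubectl get secret kubernetes-the-bm-hard-way -o yaml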

Now print a hexdump of it (run this on one of the controllers, or on all of them):

sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-bm-hard-way | hexdump -C

The output is a hexdump of the /registry/secrets/default/kubernetes-the-bm-hard-way entry. In the ASCII column on the right, the stored value should be prefixed with:

k8s:enc:aescbc:v1:key1

This shows the secret was encrypted by the aescbc provider using the key1 encryption key. If the secret appears in plain text instead, there is an issue with encryption.
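For reference, that prefix maps back to the encryption-config.yaml created earlier in this series. A minimal sketch of it looks like this (the actual secret is the base64-encoded 32-byte key you generated):

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <your base64-encoded 32-byte key>
      - identity: {}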

Test the lot!

All that's left is a proof of concept to make sure everything works as expected. You can run all of this from the remote PC you configured in the remote access part of this series.

Confirm deployments work

kubectl run nginx --image=nginx
kubectl get pods -l run=nginx
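If the pod is still pulling the image, you can wait for the rollout to complete instead of polling (at this Kubernetes version, kubectl run creates a deployment named nginx):

kubectl rollout status deployment nginx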

Confirm port forwarding works

POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward $POD_NAME 8080:80

# Then in another session…
curl --head http://127.0.0.1:8080

You can close the new session and cancel the port forwarding now.
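If you'd rather stay in one terminal, here's a quick sketch that backgrounds the port-forward, runs the check, and cleans up after itself:

kubectl port-forward $POD_NAME 8080:80 &   # run the port-forward in the background
sleep 2                                    # give it a moment to bind
curl --head http://127.0.0.1:8080
kill %1                                    # stop the background port-forward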

Check the logs work. You should see the output of the curl command you just ran.

kubectl logs $POD_NAME

#Results
127.0.0.1 - - [05/Dec/2018:19:10:37 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"

Check you can execute commands against pods

kubectl exec -ti $POD_NAME -- nginx -v

#Results
nginx version: nginx/1.15.7

Check services can be created and exposed

kubectl expose deployment nginx --port 80 --type NodePort

NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
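To see which port was allocated, echo the variable or just list the service:

echo $NODE_PORT
kubectl get svc nginx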

On Google Cloud, you'd create a firewall rule allowing remote access to the nginx node port, then grab the instance's external IP:

EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

On bare metal you can't do that, so instead forward the node port through the firewall on your router. Then run the following to make an HTTP request using your external IP address and the nginx node port.
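There's no gcloud to query here, so set EXTERNAL_IP yourself. As a sketch, you can use your router's public IP directly, or look it up with a third-party service (ifconfig.me is just one example; any equivalent works):

EXTERNAL_IP=$(curl -s https://ifconfig.me)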

curl -I http://${EXTERNAL_IP}:${NODE_PORT}

You can also visit the address in your browser!
http://${EXTERNAL_IP}:${NODE_PORT}

If you don’t have an external IP as such, you can still test this by running:

curl -I http://$(kubectl get po $POD_NAME --output=jsonpath='{.status.hostIP}'):$NODE_PORT

#Result
HTTP/1.1 200 OK
Server: nginx/1.15.7
Date: Wed, 05 Dec 2018 19:12:43 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Nov 2018 12:31:56 GMT
Connection: keep-alive
ETag: "5bfd393c-264"
Accept-Ranges: bytes

Untrusted workloads

This section will verify the ability to run untrusted workloads using gVisor.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: untrusted
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: webserver
      image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF

Verification

kubectl get pods -o wide

#Results
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE
untrusted   1/1     Running   0          8s    10.200.2.3   worker-1   <none>

Get the node name where the untrusted pod is running:

INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')

SSH into the worker node and stay there for the next few commands:

ssh ${INSTANCE_NAME}

List the containers running under gVisor:

sudo runsc --root /run/containerd/runsc/k8s.io list

Get the ID of the untrusted pod:

POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock pods --name untrusted -q)

Get the ID of the webserver container running in the untrusted pod:

CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock ps -p ${POD_ID} -q)

Use the gVisor runsc command to display the processes running inside the webserver container:

sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}

#Results
I1205 19:18:18.246021 6343 x:0] ***************************
I1205 19:18:18.246091 6343 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps 913e7a30935c45751f2bd6eb0551fe0de5ea01f775a11223d1f3e07f00168b7f]
I1205 19:18:18.246108 6343 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I1205 19:18:18.246112 6343 x:0] PID: 6343
I1205 19:18:18.246115 6343 x:0] UID: 0, GID: 0
I1205 19:18:18.246119 6343 x:0] Configuration:
I1205 19:18:18.246121 6343 x:0] RootDir: /run/containerd/runsc/k8s.io
I1205 19:18:18.246124 6343 x:0] Platform: ptrace
I1205 19:18:18.246131 6343 x:0] FileAccess: exclusive, overlay: false
I1205 19:18:18.246135 6343 x:0] Network: sandbox, logging: false
I1205 19:18:18.246139 6343 x:0] Strace: false, max size: 1024, syscalls: []
I1205 19:18:18.246142 6343 x:0] ***************************
UID PID PPID C STIME TIME CMD
0 1 0 0 19:14 0s app
I1205 19:18:18.247116 6343 x:0] Exiting with status: 0

And finally…

You can exit the SSH session now and head back to the main remote PC you've been running commands from.

Let’s clean up the cluster so you have a nice fresh one to start playing with.

kubectl delete secret kubernetes-the-bm-hard-way
kubectl delete po untrusted
kubectl delete svc nginx
kubectl delete deploy busybox nginx
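To confirm the cleanup worked, check that nothing is left behind (busybox will have been created in an earlier part of the series, if you followed it):

kubectl get pods,svc,deployments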

I also recommend removing the certificates and keys from the ~/ directories on the controller and worker nodes.
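As a sketch, assuming the certs and kubeconfigs are still sitting in ~/ where they were copied earlier in the series:

rm -f ~/*.pem ~/*.csr ~/*.kubeconfig ~/encryption-config.yaml   # adjust to match what you actually copied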

And you’re all done.

Conclusion

You’ve configured the cluster, tested it, and are just plain great.

Next: Nothing, you’re done!
