Deploying Docker Registry on Kubernetes
In a previous post we set up a Bare Metal Kubernetes cluster, and we followed it up with another post showing how to deploy a Redis cluster on it. Now we will deploy a Docker Registry on our cluster.
Why deploy a Docker Registry on a cluster? Sometimes I like to run a cluster with custom images that only run on that particular cluster. In other cases we may want to air-gap the cluster and therefore have a Docker Registry available from which each cluster node can pull its images.
Thankfully, running a Docker Registry on Kubernetes is not too difficult to pull off. To keep this guide simple and focused, we will deploy a Docker Registry with a self-signed certificate. We can always add user authentication and/or Let's Encrypt certificates later.
Create Key and Certificate
As described in the Docker guide, create a new self-signed certificate as follows:
$ mkdir -p certs
$ openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/registry.key \
-addext "subjectAltName = IP:10.211.55.250" \
-x509 -days 3650 -out certs/registry.crt
The important bit here is that we set the subjectAltName to an IP address that our MetalLB load balancer will assign. In our cluster we will assign 10.211.55.250 to the Docker Registry service (see later on).
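If you want to double-check that the SAN actually ended up in the certificate, you can inspect it with openssl (the exact output format varies a little between OpenSSL versions):
$ openssl x509 -in certs/registry.crt -noout -text | \
  grep -A1 "Subject Alternative Name"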
Store Key and Certificate in Cluster
The nice thing about Kubernetes is that it lets you store secrets safely and make them accessible to all nodes. Let's store the created key and certificate on our cluster now:
kubectl create secret tls registry-cert \
--cert=certs/registry.crt \
--key=certs/registry.key \
-n test
The above command will store registry.crt and registry.key in a TLS secret called registry-cert in the namespace test. We will be deploying our Docker Registry in this namespace.
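To confirm the secret landed where we expect it, you can list it (this assumes the test namespace already exists; if it does not, create it first with kubectl create namespace test):
$ kubectl -n test get secret registry-cert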
Create Persistent Volume for Docker Registry
It is a good idea to use a persistent volume for the Docker images stored in our registry. This way, if the pod in which our registry runs gets rescheduled to another node, we will not lose the images we stored in it.
Create a file called registry-pvc.yaml with the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-data-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Gi
The above will create a persistent volume claim on Longhorn for 50Gi in the test namespace. Create it now:
$ kubectl create -f registry-pvc.yaml
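Before moving on, it can be worth checking that the claim actually gets bound by Longhorn:
$ kubectl -n test get pvc registry-data-pvc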
Deploy Docker Registry on Cluster
Create a deployment descriptor called registry-deployment.yaml with the following contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: registry
  name: registry
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: registry
  template:
    metadata:
      labels:
        run: registry
    spec:
      nodeSelector:
        node-type: worker
      containers:
      - name: registry
        image: registry:2
        ports:
        - containerPort: 5000
        env:
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: "/certs/tls.crt"
        - name: REGISTRY_HTTP_TLS_KEY
          value: "/certs/tls.key"
        volumeMounts:
        - name: registry-certs
          mountPath: "/certs"
          readOnly: true
        - name: registry-data
          mountPath: /var/lib/registry
          subPath: registry
      volumes:
      - name: registry-certs
        secret:
          secretName: registry-cert
      - name: registry-data
        persistentVolumeClaim:
          claimName: registry-data-pvc
Here we configure one pod to create a container from the registry:2 image, using the TLS key and certificate we created earlier. We tell the container that it can find those under the /certs directory. How do they get there? For this we use volume mounts. We define a read-only volumeMount with the name registry-certs to be mounted under /certs in the container. This registry-certs volumeMount references the secret volume registry-cert in the volumes section. Note that Kubernetes exposes the contents of a TLS secret under the fixed keys tls.crt and tls.key, which is why the environment variables point to /certs/tls.crt and /certs/tls.key rather than the original file names. We also mount the persistent volume claim registry-data-pvc under the directory /var/lib/registry in the container.
Go ahead and deploy the registry on our cluster as follows:
$ kubectl create -f registry-deployment.yaml
You should now be able to see the registry pod running on the cluster in the namespace test. You can check this as follows:
$ kubectl -n test get all
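If you prefer to wait until the rollout has fully completed, and to take a peek at the registry's startup logs, something along these lines works as well:
$ kubectl -n test rollout status deployment/registry
$ kubectl -n test logs deployment/registry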
Create Registry Service
Now that our registry pod is up and running, we need to define a service for it so that our pods can access it. Remember the IP address we used when generating the key and certificate? We will use it now. Create a service descriptor called registry-service.yaml with the following content:
apiVersion: v1
kind: Service
metadata:
  name: registry-service
  namespace: test
spec:
  type: LoadBalancer
  selector:
    run: registry
  ports:
  - name: registry-tcp
    protocol: TCP
    port: 5000
    targetPort: 5000
  loadBalancerIP: 10.211.55.250
This will instruct the load balancer to expose port 5000 on IP 10.211.55.250. Go ahead and create the service:
$ kubectl create -f registry-service.yaml
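You can confirm that MetalLB handed out the address we asked for by looking at the service's external IP:
$ kubectl -n test get svc registry-service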
Great! Now you should be able to access the registry. Let's test it out:
$ curl --cacert certs/registry.crt \
https://10.211.55.250:5000/v2/_catalog
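Since nothing has been pushed yet, the catalog should come back empty, so you can expect a response along these lines:
{"repositories":[]}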
Load Certificate in ca-certificates
Since we are dealing with a self-signed certificate, we need to instruct the host operating system to trust this certificate. Normally you would do that with a separate Certificate Authority (CA) certificate, but since ours is self-signed, the same certificate serves both as the certificate for the registry service and as its own CA. So let's load registry.crt as a new CA on all the cluster nodes. We need to do this because the k3s service on each cluster node will need to be able to pull from this registry, and it must trust this certificate. We can easily do this using Ansible.
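A minimal sketch of the Ansible commands, assuming Debian/Ubuntu-based nodes (the CA directory and the update command differ on other distributions):
$ ansible -i hosts all -b -K -m copy \
-a "src=certs/registry.crt dest=/usr/local/share/ca-certificates/registry.crt"
$ ansible -i hosts all -b -K -m shell \
-a "update-ca-certificates"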
Note that we are using a hosts file here that we defined in our previous blog post Bare Metal Kubernetes.
With the certificate now loaded on each cluster node, we will need to restart the k3s services so that they pick up the change. First restart the worker nodes as follows:
ansible -i hosts workers -b -K -m shell \
-a "systemctl restart k3s-agent"
Next restart the control (a.k.a. master) nodes:
ansible -i hosts control -b -K -m shell \
-a "systemctl restart k3s"
Test it Out!
With the worker and control nodes restarted, let's test out our new registry! Pull the alpine image and retag it for our new registry as follows:
$ docker pull alpine
$ docker tag alpine 10.211.55.250:5000/alpine:latest
$ docker push 10.211.55.250:5000/alpine:latest
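To verify the push, you can query the catalog again; the alpine repository should now show up in the response:
$ curl --cacert certs/registry.crt \
https://10.211.55.250:5000/v2/_catalog
{"repositories":["alpine"]}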
If all went well, the retagged alpine image should now be in our local Docker registry on the cluster. Let's now try to deploy a pod with this new image. Create a file called alpine-test-pod.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: alpine-test
  namespace: test
spec:
  containers:
  - name: alpine
    image: 10.211.55.250:5000/alpine:latest
    command: ["sleep", "60s"]
  restartPolicy: "Never"
Deploy it:
$ kubectl create -f alpine-test-pod.yaml
Check if the deployment went well:
$ kubectl -n test describe pod alpine-test
If all went well, the pod should have been created and started up successfully! Now you can store images ‘locally’ on your cluster!
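Once you are done testing, the throwaway pod can be removed again (the image itself of course stays in the registry):
$ kubectl -n test delete pod alpine-test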