Private Container Registry on Kubernetes

David Dymko
May 24 · 5 min read
Photo by Alex Duffy on Unsplash

This guide will help you configure a private container registry running on your Kubernetes cluster, backed by S3-compatible object storage.

What you will need:

  • Basic working knowledge of Kubernetes
  • A running Kubernetes cluster: We will be using Kubernetes resources such as Load Balancers that require cloud provider support.
  • Basic working knowledge of Helm
  • Valid Domain

All of the instructions in this guide can be swapped out for your cloud provider of choice with minor changes. We will be using Vultr as our cloud provider for this guide.

If you also wish to use Vultr, there is an open-source Terraform module that Vultr provides called Condor, which bootstraps a working cluster in a few minutes. To find out more, visit the Condor repository.

There are a few steps required to set up our private container registry backed by an S3 backend, and they are as follows:

  • Deploy and secure the Registry.
  • Configure our ingress route so we have a public way to connect to the registry.
  • Configure our local environment and Kubernetes to interact with the Registry.

Container Registry

Getting the Registry deployed is fairly straightforward with a simple Helm chart. However, before we deploy the chart there are a few steps required.

Setting up Object Storage

You will need to create an S3 bucket and make sure you have the following information:

  • Bucket name
  • Region Endpoint
  • Region
  • Access Key
  • Secret Key

Registry Authentication

If you want to secure your registry so that only authenticated users can push/pull images from it, we will need to set up basic authentication. We will use htpasswd to generate our username and password. Note that the registry only accepts bcrypt-hashed passwords, so pass the -B flag:

htpasswd -cB auth ddymko

This will create a file called auth containing my username ddymko and a bcrypt hash of my password.

Registry Helm chart

The Helm configuration for the registry should look like the following.
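As a minimal sketch, assuming the twuni/docker-registry chart, the values file might look like this. The bucket name, region endpoint, keys, and htpasswd hash are all placeholders for the values you collected earlier:

```yaml
# Sketch of values.yaml for the twuni/docker-registry Helm chart.
# Every credential and bucket detail below is a placeholder.
storage: s3
secrets:
  htpasswd: |-
    ddymko:$2y$05$REPLACE_WITH_YOUR_BCRYPT_HASH
  s3:
    accessKey: "YOUR_ACCESS_KEY"
    secretKey: "YOUR_SECRET_KEY"
s3:
  region: us-east-1                          # Vultr requires us-east-1
  regionEndpoint: https://your-region-endpoint
  bucket: your-bucket-name
```

It can then be installed with something like helm repo add twuni https://helm.twuni.dev followed by helm install registry twuni/docker-registry -f values.yaml; a release name of registry produces the registry-docker-registry resource prefix.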

One thing to note if you are using Vultr: the region must be set to us-east-1.

By running kubectl get po && kubectl get svc, you will see a pod and a service running with the prefix registry-docker-registry.


Ingress Helm Chart

We will be going with Ingress-Nginx for our ingress controller. This will deploy an external Load Balancer on the cloud provider we are currently running on, which will be the public-facing IP for our cluster.

The Helm configuration is as follows:

Note: If you are running on a cloud provider other than Vultr, you will need to look up which ingress configuration and annotations your provider supports.
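As a sketch, the controller can be installed with Helm from the project's chart repository; the release name lb matches the service name referenced in the next section:

```shell
# Add the ingress-nginx chart repo and install the controller
# under the release name "lb" (requires a running cluster).
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install lb ingress-nginx/ingress-nginx
```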

This will install 2 pods and 2 services.

Attaching Domain to Load Balancer

You may notice that the lb-ingress-nginx-ingress-controller service has an EXTERNAL-IP of pending. This is because the Load Balancer is still being deployed, and it may take a minute or two to get its IP address.

Once the EXTERNAL-IP field shows an IP address, you will need to attach your domain to the LB IP. With Vultr, you can open the DNS section of the customer portal and add a new domain pointing at the LB IP.


We want to secure our registry with SSL. I will be using Lego to create my Let's Encrypt certs. If you want to use another method to create your SSL certificate, feel free to do so.

To find the configuration for your specific cloud provider, see the Lego DNS provider documentation.
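As an example using Lego's Vultr DNS provider, the invocation looks roughly like the following. The email, domain, and API key are placeholders:

```shell
# Placeholder email/domain; Lego's Vultr provider reads the API
# key from the environment and solves a DNS-01 challenge.
export VULTR_API_KEY=your-api-key
lego --email you@example.com \
  --dns vultr \
  --domains registry.example.com \
  run
```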

Running this will create a certs folder in the working directory, from which we can grab our cert and key to create a TLS secret in Kubernetes.

kubectl create secret tls ssl --cert=certs/certificates/ --key=./certs/certificates/

With all of that done, the YAML for your ingress route should look like the following:
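As a sketch, assuming the chart's default service name registry-docker-registry on port 5000 and the TLS secret ssl created above (registry.example.com is a placeholder for your domain):

```yaml
# Sketch of the registry Ingress (networking.k8s.io/v1).
# The proxy-body-size annotation lets nginx accept large image layers.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: registry
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - registry.example.com
      secretName: ssl
  rules:
    - host: registry.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: registry-docker-registry
                port:
                  number: 5000
```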

After deploying the YAML, the container registry should be running and accessible at your domain. You should also be greeted with a prompt for your username and password to access the registry.

Pushing/Pulling images

Now that our registry is running and accessible, we need to set up our local machine along with Kubernetes so they know how to push and pull images.


Getting Docker set up locally with our private registry is fairly straightforward.

docker login <your-registry-domain>

Now that we are logged in, we are able to pull and push images to our registry.

➜  ~ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:9a839e63dad54c3a6d1834e29692c8492d93f90c59c978c1ed79109ea4fb9a54
Status: Image is up to date for alpine:latest
➜ ~ docker tag alpine <your-registry-domain>/alpine:v1.0.0
➜ ~ docker push <your-registry-domain>/alpine:v1.0.0
The push refers to repository [<your-registry-domain>/alpine]
3e207b409db3: Pushed
v1.0.0: digest: sha256:39eda93d15866957feaee28f8fc5adb545276a64147445c64992ef69804dbf01 size: 528


Now, in order for Kubernetes to be able to pull images from our registry, we need to create registry credentials. This is done by taking our local Docker authentication, which is located at ~/.docker/config.json, and giving it to Kubernetes.

kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson

To inspect the credentials you can run the following:

kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode

Note: Your Docker credentials may be kept in a credential store, in which case they are not stored in config.json. You will be required to set "credsStore": "" in config.json and run docker login again. More information can be found in the Docker credential store documentation.

Deploying a container

With your registry accessible through your domain, docker login working, and your Docker credentials deployed to Kubernetes, you are ready to deploy a container from your registry.

Below is a sample YAML that will pull the alpine image we pushed to our registry. It also defines imagePullSecrets pointing at our regcred secret, so that Kubernetes is able to authenticate with our registry.
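A minimal sketch, assuming the image was tagged and pushed as shown earlier (registry.example.com is a placeholder for your domain):

```yaml
# Sketch of a Pod pulling the private alpine:v1.0.0 image.
# imagePullSecrets references the regcred secret created above.
apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
    - name: alpine
      image: registry.example.com/alpine:v1.0.0
      command: ["sleep", "3600"]
  imagePullSecrets:
    - name: regcred
```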

You should see the pod get deployed successfully, and if you run kubectl describe pod alpine, you will see Kubernetes log pulling the image from the registry in the Events section.

Wrapping up

Now, with a private registry, you can take things a step further and set up CI/CD to automate image building, deployments, and more.




Written by David Dymko, a Gopher with a keen interest in cloud native.


