Simplify Your Local Helm Workflow with a Local Docker Registry
In the world of containerization and orchestration, Helm and Docker are indispensable: Helm manages Kubernetes applications, while Docker builds and runs containers. Yet while learning and experimenting with these tools, developers often find themselves jumping through the hoops of pushing Docker images to a remote repository such as Docker Hub. Today, we will streamline that process so you can work with Helm against a local Docker registry, eliminating the need to push images to Docker Hub.
The Dilemma with Docker Hub
Many instructional guides and videos simply assume you will push your Docker images to Docker Hub. While Docker Hub is a widely used platform, relying on it for learning, experimenting, or testing presents a hurdle, especially with Helm, which may involve pushing several images at once. Keeping more than one image private requires a paid Docker Hub plan, adding cost and friction for developers who simply want to learn and explore.
Quick Guide to Helm and Local Docker Registry Setup
Setting Up the Local Docker Registry
Start by running Docker Compose to bring up your local Docker registry, using the docker-compose.yaml provided in the GitHub repository.
docker compose -f docker/registry/docker-compose.yaml up -d
Confirm the registry is active by visiting http://localhost:5000/v2/_catalog.
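If you'd rather not use a compose file, the same registry can be started directly. This is a minimal sketch, assuming the official registry:2 image from Docker Hub and the `local-registry` container name chosen here for illustration:

```shell
# Start a throwaway local registry container on port 5000
docker run -d --name local-registry -p 5000:5000 registry:2

# Query the catalog endpoint; a fresh, empty registry returns {"repositories":[]}
curl http://localhost:5000/v2/_catalog
```

Note that Docker allows plain-HTTP pushes to localhost registries by default, so no insecure-registry configuration is needed for this setup.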
Building and Pushing Images to Local Registry
Modify the Docker Compose file to reference the local registry. For instance:
version: "3"
services:
  api:
    image: localhost:5000/docker-web-api:latest
    # ... other configurations remain unchanged
  web:
    image: localhost:5000/docker-web-app:latest
    # ... other configurations remain unchanged
Next, build and push your Docker images:
docker compose build
docker compose push
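The same result can be achieved per image without Compose. As a sketch, assuming an image already built locally under the name docker-web-api:

```shell
# Retag the local image so its name points at the local registry,
# then push it there
docker tag docker-web-api:latest localhost:5000/docker-web-api:latest
docker push localhost:5000/docker-web-api:latest
```

The registry an image is pushed to is determined entirely by the hostname prefix in its tag, which is why the compose file above names its images `localhost:5000/...`.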
Validate that the images are correctly pushed by revisiting http://localhost:5000/v2/_catalog. You should see:
{
  "repositories": [
    "docker-web-api",
    "docker-web-app"
  ]
}
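Beyond the catalog, the Docker Registry HTTP API v2 can also list the tags available for each repository, which is handy for checking exactly what was pushed:

```shell
# List tags for a single repository (standard v2 API endpoint)
curl http://localhost:5000/v2/docker-web-api/tags/list
# Expected shape: {"name":"docker-web-api","tags":["latest"]}
```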
Deploying with Helm
With the images pushed to your local registry, you can now deploy your application using Helm. Run the following commands:
cd k8s/local-app
kubectl create namespace local-app
helm install local-helm-release . --namespace local-app
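After installation, you can verify the release and its resources from the command line; this sketch uses the release name and namespace from the commands above:

```shell
# Inspect the release status and the resources it created
helm status local-helm-release --namespace local-app
kubectl get pods,svc --namespace local-app

# Tear everything down when you are finished experimenting
helm uninstall local-helm-release --namespace local-app
kubectl delete namespace local-app
```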
We can check our local cluster dashboard with Rancher Desktop and see the deployment running.
If you don’t have Rancher Desktop installed, Colin Griffin has written a well-crafted installation guide: Blog Link. If you want to know why you should replace Docker Desktop with Rancher Desktop, read my recent blog on that as well: Blog Link.
Anyway, to validate the deployment, navigate to http://localhost:30000 and enter http://nx-express-api:3333/api in the input field. You should observe the expected result. If you’re curious where this UI comes from, check out my recent blog post: docker compose blog.
Explanation of Configuration
It’s vital to understand some key configurations. Let’s dissect a segment:
api:
  name: nx-express-api
  image: localhost:5000/docker-web-api:latest
  # ... other configurations
web:
  name: nx-react-ui
  image: localhost:5000/docker-web-app:latest
  port: 5550
  nodePort: 30000
  # ... other configurations
In this snippet from values.yaml, notice that nx-express-api and nx-react-ui become our deployment names, effectively acting as our hostnames within the Kubernetes cluster.
The 30000 node port is defined in both the service.yaml and values.yaml files; it binds the service to a port on our local machine, which is crucial for local testing.
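You can confirm this port binding from the cluster side as well; as a sketch, assuming the namespace from the deployment steps above:

```shell
# The PORT(S) column for the web service should show
# something like 5550:30000/TCP, i.e. container port 5550
# exposed on node port 30000
kubectl get svc --namespace local-app
```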
Conclusion
This succinct guide and the accompanying GitHub repository present a simplified, Docker Hub-independent approach to learning and working with Helm and Docker locally. A local Docker registry provides a seamless, cost-free environment for exploration and testing, letting developers grasp and master Helm and Docker without unnecessary external dependencies.
A word of caution: always consider the security implications of these techniques before employing them in a production scenario, and opt for a secure, production-ready registry solution for real-world applications.
May your Helm and Docker journey be smooth sailing!