Serving BIND DNS in Kubernetes

Uğur Akgül
The Startup
Dec 2, 2020


Hi, in this story I will explain how to serve a BIND DNS service (a BIND container) in Kubernetes. I will serve a DNS service, in this case a DNS-resolving pod, and get a query result from that pod.

In today’s life, some of us are using, or serving, DNS services with BIND. Up until recently, I was serving a BIND DNS service with VMs. That meant that for every new load I had to create a new VM and configure BIND on it to serve correctly. As you might know, creating a VM and configuring its applications takes time, even with automation tools like Ansible, Chef or Puppet. This brings up the question: can I reduce that time?

To answer that question: yes, I can reduce it, with containers :)

Enter The Container Era

Containers, the magic of our age. Everything has to be a container now! Of course, we are not like that; we are just trying some new things and are curious about what we can achieve with this technology. If we can do our usual tasks more easily with containers, we will use them to save time and effort.

With this ideology in mind, I was thinking about a containerized BIND DNS. If I can achieve this, I can bring up more and more containers to grow my DNS service, or bring some down when I don’t need that much serving power and save my resources.

To distribute my service load, I need a load balancer. To bring up more containers based on my service load, and to keep my service up all the time, I need high availability. These requirements tell me that I need Kubernetes, not Docker Compose.

Kubernetes The Easy Way?

After I decided on Kubernetes as the orchestrator for my containers, I installed and configured a Kubernetes cluster with only 1 master and 1 worker node, just for testing. (I used Ubuntu; it saves so much time when bootstrapping a cluster.)
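I won’t go into the bootstrap details here, but a minimal sketch with kubeadm (one common way to do it; the story doesn’t say which tool was actually used) looks something like this,

# On the master node:
sudo kubeadm init
# Then install a pod network add-on (CNI) of your choice, and on the worker node
# run the `kubeadm join ...` command that kubeadm init prints.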

Now, to create a BIND container, I must have a BIND container image. I searched the internet for a good image and thankfully came across this angel’s good, good work: https://github.com/cytopia/docker-bind This was what I needed. After reading some documentation and experimenting by myself, I finally managed to fire up a single container with docker run, using this image, and got query results.
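The exact command isn’t shown here; a minimal sketch of such a docker run could look like this (the published ports are an assumption, and DOCKER_LOGS is described further below; see the README for the full list of options),

# Run BIND in the background, publishing DNS on the host
docker run -d --name bind \
  -p 53:53/udp -p 53:53/tcp \
  -e DOCKER_LOGS=1 \
  cytopia/bind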

Then I thought: if I can run this container successfully with Docker, I can run it in Kubernetes.

Creating Our Kubernetes Deployment

I needed a Deployment to start serving DNS, because I don’t want my DNS service to be down. Never. This way I know that there is always at least one pod serving my BIND container.

Let’s take a look at our bind-deployment.yml
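Here is a minimal sketch of what it could look like, using the details from this story (the cytopia/bind image, port 53 and DOCKER_LOGS=1); the names and labels are placeholders,

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bind-deployment            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bind
  template:
    metadata:
      labels:
        app: bind
    spec:
      containers:
      - name: bind
        image: cytopia/bind        # image from github.com/cytopia/docker-bind
        env:
        - name: DOCKER_LOGS        # send BIND logs to stdout for `kubectl logs`
          value: "1"
        ports:
        - containerPort: 53        # DNS over UDP
          protocol: UDP
        - containerPort: 53        # DNS over TCP
          protocol: TCP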

Create this bind-deployment.yml file and execute the command below,

kubectl create -f bind-deployment.yml

Now let’s take a look at our pods,

kubectl get pods -o wide
[Screenshot: kubectl get pods command output]

Our pod is up and running on the worker node. We can get its logs, just like in Docker.

kubectl logs <pod_name>
[Screenshot: logs for a specific pod]

A side note: I am getting BIND application logs from the pod’s logs only because I set DOCKER_LOGS=1 in my deployment file. See the GitHub link for more information.

After a successful deployment, we need to create a service to expose our pods to the external network. If we don’t create a service, we cannot reach the BIND DNS container inside our pod.

Our service will connect its port 53 to the pod’s port 53, and will forward UDP packets coming to the host’s port 30053 to the container’s port 53, and thus to our BIND application.

I am using the NodePort service type to access the service from the external network, with node port 30053 mapped to port 53 inside the BIND container. Because I am using node port 30053, all queries must come to this port.

PS: You can use any port you like, as long as it’s in the NodePort range (30000-32767 by default).

Our bind-service.yml will look like this.
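Again, a minimal sketch of what it could look like; the selector assumes the app: bind label from the deployment sketch above, and the name is a placeholder,

apiVersion: v1
kind: Service
metadata:
  name: bind-service             # placeholder name
spec:
  type: NodePort
  selector:
    app: bind                    # must match the pod labels in the deployment
  ports:
  - name: dns-udp
    protocol: UDP
    port: 53                     # the service's port
    targetPort: 53               # the container's port
    nodePort: 30053              # the port exposed on every node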

Let’s create our service,

kubectl create -f bind-service.yml

and let’s check if our service has been created,

kubectl get services
[Screenshot: kubectl get services command output]

As we can see, our service has been created. Look at the ports section: its port 53 has been bound to the host’s port 30053.

Moment Of Truth

After all the work (writing YAML files, creating deployments and services) we can now test whether our DNS service is working.

Our goal here was to serve a DNS service from inside a Kubernetes cluster. To test this, we can run nslookup against the IP of the machine where our master or worker node resides. An important note: the default nslookup port is 53 (because DNS uses port 53 over UDP), so we must explicitly specify our service’s node port, which was 30053.

To test our DNS service,

nslookup -port=30053 medium.com <kubernetes_master_or_worker_IP>

Or, if you have your hosts file configured with the IP addresses of these nodes, you can use hostnames instead.

And voilà! We got our nslookup result from the worker node.

In fact, our pod right here can confirm it with its logs. We can check our pod’s logs like we did earlier.

kubectl logs <pod_name>
[Screenshot: kubectl logs output for our pod]

As you can see from the logs, our query response indeed came from our container.

If you add DNS_FORWARDER as an environment variable in your deployment, as mentioned on the GitHub page, you can use this service as a caching DNS server.
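For example, a sketch of that environment variable in the deployment’s container spec (the upstream resolver IPs here are just an example),

        env:
        - name: DOCKER_LOGS
          value: "1"
        - name: DNS_FORWARDER        # comma-separated upstream resolvers
          value: "8.8.8.8,8.8.4.4"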

As of now, I haven’t thought of any solution or workaround to get past NodePort 30053. kube-proxy is usually used to expose pods’ ports, but pods can die and be replaced, so you would need kube-proxy to target a service, and this is not a feature in kube-proxy, at least not yet.

If you have any ideas for improvement, please let me know. And if you have any questions, you can contact me. Thank you for your time.

And the most important part is STAY HOME & STAY SAFE :)
