Dnsmasq on kubernetes

Serving Dnsmasq as a DNS Service In Kubernetes with UDP LoadBalancer

Uğur Akgül
TurkNet Technology

--

Hi! In this post, we will take a look at deploying dnsmasq on Kubernetes and serving DNS queries with it.

The main goal of this post is the same as Serving BIND DNS in Kubernetes, which I wrote in 2020: serving DNS queries from Kubernetes.

Prerequisites

  • A kubernetes cluster
  • NGINX installed on a separate VM

In my previous post, I served BIND DNS from NodePort 30053 because there was no UDP LoadBalancer option at the time. In this post we will serve dnsmasq behind a UDP LoadBalancer, an approach you can apply to the BIND DNS deployment as well.

Why Dnsmasq?

At TurkNet, we run recursive DNS systems, and their performance is critical: milliseconds in query responses matter. During busy hours, we must keep a low-latency, highly available DNS system running. To achieve that, we try these DNS services and compare them with each other. Currently we are testing dnsmasq and comparing it with BIND.

Why Kubernetes?

We will be deploying dnsmasq on Kubernetes. "Why would you?", you might ask. Let me explain a bit.

Before Kubernetes, we deployed these DNS services in virtual machines, and we still run them there. Running these services in virtual machines has its benefits, but there are also downsides. I will focus on the downsides, such as:

  • Virtual machine service bootstrap time
  • Virtual machine OS overhead
  • Virtual machine hardware overhead
  • Virtual machine hardware limitations
  • Virtual machines cannot autoscale themselves

Kubernetes gives us a lightweight system. We don’t deal with operating system loads. We are free from operating system-service compatibility. We can autoscale these deployments with Horizontal Pod Autoscalers.

All of the above are pros of Kubernetes and cons of virtual machines, and you can extend the list. There is also a technology trend, and we have to keep up with it.

At TurkNet we have an "Edge Computing" project, which aims to serve DNS at the edge. To manage workloads at the edge, you must use lightweight systems, and Kubernetes comes in handy for this task. We also take advantage of Kubernetes autoscaling to distribute the workload at the edge.

So that's basically why we will be using Kubernetes for our dnsmasq deployment. Let's get started.

Creating the Dnsmasq Deployment

First, we will create our dnsmasq deployment. We will deploy it in a namespace called dnsmasq.

NOTE: We will be using this container image for the deployment.

DNSMasq deployment
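The original manifest was shown as an image; here is a minimal sketch of what a matching deployment could look like. The image name is a placeholder for the container image linked above, and the label names are my own assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dnsmasq-deployment
  namespace: dnsmasq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dnsmasq
  template:
    metadata:
      labels:
        app: dnsmasq
    spec:
      containers:
        - name: dnsmasq
          # Placeholder for the container image referenced in the note above
          image: <dnsmasq-image>
          ports:
            # dnsmasq listens for DNS queries on UDP port 53
            - containerPort: 53
              protocol: UDP
```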

Creating Dnsmasq Service

We will need a Service to expose our deployment. We will expose it through NodePort 30053 (seems familiar, huh? :)). This Service will also reside in our dnsmasq namespace.

DNSMasq service
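The Service manifest was also shown as an image; a minimal sketch could look like the following, assuming the same app: dnsmasq label as the deployment sketch above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dnsmasq-service
  namespace: dnsmasq
spec:
  type: NodePort
  selector:
    app: dnsmasq
  ports:
    - name: dns-udp
      protocol: UDP
      # Port 53 on the Service maps to port 53 in the pod,
      # exposed on every worker node at NodePort 30053
      port: 53
      targetPort: 53
      nodePort: 30053
```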

Deploying Dnsmasq Pod and Service

First, we will create a namespace called "dnsmasq" with the command below:

kubectl create ns dnsmasq

After that, we can apply our deployment with the command below:

kubectl apply -f dnsmasq-deployment.yml

After the deployment, we can create our service with the command below:

kubectl apply -f dnsmasq-service.yml

After these commands, we can check our deployment as below:

Checking our kubernetes deployment

As you can see, we have a deployment named dnsmasq-deployment and this deployment has 1 pod named dnsmasq-deployment-584b95fc5d-6ncvm and we have a service named dnsmasq-service.

We can see that our service maps NodePort 30053 to pod port 53, which means any traffic arriving at NodePort 30053 will go to pod port 53, where our dnsmasq application is running.

NOTE: This traffic is UDP, so you cannot test it with telnet. You can use netcat or make some DNS queries.

Although this is sufficient, in real life we don't want to specify a port when making a DNS query. In this state, any DNS query sent to the default port 53 will not be received by our application, resulting in a "connection timed out; no servers could be reached" error.

To test this deployment you can simply use:

nslookup -port=30053 medium.com <your-kubernetes-worker-ip>

As you can see above, we need to specify the port in our DNS query. This is not practical in most cases; we need to be able to serve our DNS on port 53.

Let's add a UDP LB to our setup for this purpose.

Adding UDP LoadBalancer

We will be using NGINX as a UDP LoadBalancer for our case. Our UDP packet flow will look like below.

UDP packet flow

One thing to note here is that our DNS query response will be returned by the NGINX LB, not by Kubernetes directly. Thus our backend is hidden from the end user.

NOTE: At this step, we expect NGINX to already be installed on a separate VM.

With NGINX installed, we can configure it to load-balance and serve our DNS queries.

Let’s create a file called udp-lb.conf under /etc/nginx/conf.d/ like below:

/etc/nginx/conf.d/udp-lb.conf

# Load balance UDP-based DNS traffic across our Kubernetes workers
stream {
    upstream dns_upstreams {
        server <your-kubernetes-worker-1>:30053;
        server <your-kubernetes-worker-2>:30053;
        server <your-kubernetes-worker-3>:30053;
    }

    server {
        listen 53 udp;
        proxy_pass dns_upstreams;
        proxy_timeout 1s;
        proxy_responses 1;
        error_log /var/log/dns.log;
    }
}

With this configuration, NGINX is ready to accept UDP packets and forward them to the designated upstream servers, which are our Kubernetes workers.

NOTE: You need to create the /var/log/dns.log file to keep these logs separate.

Action Time!

After all the work is done, we can finally test our setup. Because we have a UDP LB in place, this time we don't need to provide a port in the DNS query. We can test our setup with the command below.

nslookup medium.com <your-nginx-server-ip>

If everything is in place, and there are no firewall or iptables rules in the way, this command should successfully return the query results.

Closing Thoughts

In this post we've created a dnsmasq deployment on Kubernetes and configured NGINX to load-balance UDP packets (DNS queries). On top of this configuration, an HPA can be added for autoscaling.
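The HPA mentioned above could be sketched as follows, targeting the deployment by name; the CPU threshold and replica bounds here are illustrative assumptions, not values from the original setup:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: dnsmasq-hpa
  namespace: dnsmasq
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: dnsmasq-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    # Scale out when average CPU utilization across pods exceeds 70%
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```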

Thank you for reading and have a good time!

Let me know if you have any questions, I will be happy to help :)

Written by Uğur Akgül

Tech Lead, Platform Engineering @TurkNet // You can find me at https://www.linkedin.com/in/hikmetugurakgul/
