Planning a production ready kubernetes with fundamental Controllers & Operators — Part 3 — DNS & Service discovery

Haggai Philip Zagury
Published in Israeli Tech Radar · 7 min read · May 17, 2024

As we continue to build our production-grade cluster, we should explore the central components of Kubernetes. One of its core features is service discovery, which is achieved using DNS. This sub-system is fundamental for understanding how applications communicate with one another and how they are made accessible externally. This part of the series starts by diving into the internal mechanisms of service discovery, then expands on how Kubernetes integrates with external DNS systems for service accessibility, which will also prepare us for part 4, where we will discuss ingress traffic.

Service Discovery

Service discovery is a key functionality in Kubernetes, allowing services to locate each other and communicate within a cluster without hard-coding specific IP addresses. Kubernetes implements this through a DNS-based service discovery mechanism, predominantly managed by CoreDNS.

If we were to compare a cluster to a (distributed) computer, kube-dns (the classic name for Kubernetes DNS) plays exactly the role that the local resolver at 127.0.0.1 plays on a given operating system.

service discovery with CoreDNS (a.k.a. kube-dns)

CoreDNS is the recommended DNS server in Kubernetes and has replaced kube-dns. It is configured via a ConfigMap and is responsible for translating service names to IP addresses, enabling pods to dynamically discover and communicate with each other through their service names. When a service is created in Kubernetes, it is automatically assigned a DNS entry by CoreDNS. For instance, if a service named frontend is created in the default namespace, it can be accessed from any pod within the cluster using the DNS name frontend.default.svc.cluster.local
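The naming convention CoreDNS follows can be expressed as a tiny helper. This is only a sketch of the pattern, not Kubernetes code: service_fqdn is a hypothetical function name, and cluster.local is merely the default cluster domain (yours may differ):

```python
# Sketch of the DNS naming convention CoreDNS uses for services:
# <service>.<namespace>.svc.<cluster-domain>
def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Build the cluster-internal FQDN for a Kubernetes service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("frontend"))  # frontend.default.svc.cluster.local
```

Pods in the same namespace can usually use the short name frontend as well, since the pod's resolv.conf search path appends the namespace and cluster domain automatically.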

This internal DNS resolution is critical because it abstracts the complexity of network configurations and provides a consistent and straightforward method for service-to-service communication. CoreDNS can also be configured to handle more complex scenarios, such as stub domains, upstream nameservers, and custom DNS entries, making it a versatile tool within the Kubernetes ecosystem.
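For reference, a minimal Corefile (the payload of the CoreDNS ConfigMap) resembling the stock configuration might look like the following; treat it as a sketch rather than a drop-in config, as distributions tweak the plugin list:

```
.:53 {
    errors
    health
    ready
    # resolve *.cluster.local names from the Kubernetes API
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    # everything else goes to the node's upstream resolvers
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}
```

Stub domains and custom upstreams are added as extra server blocks, e.g. a `consul.local:53 { ... }` block forwarding to another resolver.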

service discovery == a CoreDNS query

Kubernetes supports several types of services, which dictate how they are exposed both inside and outside the cluster. Here are the primary types:

  1. ClusterIP: This is the default service type, which gives the service a cluster-internal IP. This makes the service only reachable from within the cluster.
  2. NodePort: This type exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service routes, is automatically created. You can contact the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>.
  3. LoadBalancer: This service type exposes the service externally using a cloud provider’s load balancer. It assigns a fixed external IP address to the service.
  4. ExternalName: Instead of providing methods to route traffic to the Pod, this maps the service to the contents of the externalName field (e.g., foo.bar.example.com).
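Since ExternalName is the one type we won't demonstrate later, here is a minimal sketch of such a manifest; external-db is a hypothetical service name, and foo.bar.example.com is the example target from above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: foo.bar.example.com
```

Pods resolving external-db.default.svc.cluster.local receive a CNAME to foo.bar.example.com instead of a ClusterIP.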

DNS Records in CoreDNS

CoreDNS, configured via the Kubernetes plugin, automatically handles DNS records for services based on their types:

  • ClusterIP Services: CoreDNS creates A and SRV records for these services. The A record points to the ClusterIP of the service, allowing cluster-internal DNS resolution.
  • NodePort and LoadBalancer Services: Alongside the standard A record pointing to the ClusterIP, these services also get DNS entries that reflect their externally accessible endpoints. For LoadBalancer services, this includes A records pointing to the external IP addresses.
  • ExternalName Services: CoreDNS creates a CNAME record pointing to the external domain specified in the externalName field, which is useful for services that act as an alias to an external service.
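The SRV names CoreDNS publishes likewise follow a fixed pattern. As a sketch (srv_record_name is a hypothetical helper; the port name and protocol come from the service's port definition):

```python
# Sketch of the SRV record naming convention for a named service port:
# _<port-name>._<protocol>.<service>.<namespace>.svc.<cluster-domain>
def srv_record_name(port_name: str, protocol: str, service: str,
                    namespace: str = "default",
                    cluster_domain: str = "cluster.local") -> str:
    return f"_{port_name}._{protocol}.{service}.{namespace}.svc.{cluster_domain}"

print(srv_record_name("http", "tcp", "frontend"))
# _http._tcp.frontend.default.svc.cluster.local
```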

Now that we’ve covered internal service discovery and understand it is merely a DNS resolution process, let’s see how it plays with external resolution by integrating with, you guessed it, another controller named ExternalDNS.

External Service Discovery Using ExternalDNS

While CoreDNS manages internal DNS resolution for workloads inside the cluster:

kubectl get svc -n kube-system kube-dns
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   12d

We need an external resolver that is accessible to our end users. Enter ExternalDNS, which extends these capabilities by allowing registration of DNS records with an external DNS provider, automating the management of external DNS records according to the services and/or ingresses defined in the cluster. This is particularly useful for applications that need to be accessible from outside the cluster or over the internet.

ExternalDNS watches the Kubernetes API for changes in services and ingresses and automatically updates DNS records in real-time. This means when you deploy a new service with an ingress that specifies a host, ExternalDNS will create the appropriate DNS record to point to the ingress controller, making the service accessible via a human-readable URL.

I will not focus on the ExternalDNS configuration, as it varies between the different backend DNS services; the integration is pretty straightforward and supports multiple DNS providers, including AWS Route 53, Google Cloud DNS, and others. This integration ensures that as services are scaled or redeployed across the cluster, their DNS records are automatically kept up to date, eliminating the need for manual DNS configuration and reducing the potential for human error.
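To give a rough idea of what the configuration involves, here is a hypothetical excerpt of the container arguments for an AWS Route 53 setup; the flags are real ExternalDNS options, but the values (domain, owner id) are placeholders for this example:

```yaml
# excerpt from an ExternalDNS Deployment spec (AWS Route 53 example)
args:
  - --source=service                  # watch Service resources
  - --source=ingress                  # ...and Ingress resources
  - --provider=aws                    # backend DNS provider
  - --domain-filter=infra.tikalk.dev  # only manage this zone
  - --policy=upsert-only              # create/update records, never delete
  - --registry=txt                    # track record ownership via TXT records
  - --txt-owner-id=my-cluster         # unique id per cluster
```

The owner-id/TXT registry pair is what lets several clusters share one hosted zone without stepping on each other's records.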

The most common example of the necessity of ExternalDNS: nowadays cluster nodes come and go, so take a given service that moved from one node to another. This may cause the NodePort and node IP to change, which in turn requires someone to update the record in DNS. This is where the ExternalDNS controller kicks in …

An example usage of ExternalDNS

To illustrate how ExternalDNS works, I would like to take two Kubernetes resources: a Service and an Ingress.

Consider a Kubernetes Service resource configured with type LoadBalancer to expose a web application:

# create a simple deployment
kubectl create deployment --image traefik/whoami webapp

# expose the deployment with a service of type LoadBalancer
kubectl expose deployment webapp --port 80 --type LoadBalancer

# let's get the loadBalancer ip
kubectl get svc webapp
NAME     TYPE           CLUSTER-IP       EXTERNAL-IP                                                                  PORT(S)        AGE
webapp   LoadBalancer   172.20.158.151   a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com   80:30637/TCP   106s

In the example above, the load balancer hostname returned by AWS is a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com, which resolves to:

dig a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com

; <<>> DiG 9.10.6 <<>> a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 33737
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1424
;; QUESTION SECTION:
;a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com. IN A

;; ANSWER SECTION:
a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com. 60 IN A 35.158.198.54
a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com. 60 IN A 3.122.187.35
a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com. 60 IN A 3.69.96.248

;; Query time: 47 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Thu May 02 09:57:59 IDT 2024
;; MSG SIZE rcvd: 151

Considering I already have ExternalDNS installed, if I wanted to route my domain to this service via webapp.infra.tikalk.dev I could annotate the Service resource like so:

apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: webapp.infra.tikalk.dev
  labels:
    app: webapp
  name: webapp
  namespace: default
spec:
  ports:
  - nodePort: 30637
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: webapp
  type: LoadBalancer

If we wait 10–30 seconds and let ExternalDNS do its magic, we will find the following:

dig webapp.infra.tikalk.dev

; <<>> DiG 9.10.6 <<>> webapp.infra.tikalk.dev
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58512
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1424
;; QUESTION SECTION:
;webapp.infra.tikalk.dev. IN A

;; ANSWER SECTION:
webapp.infra.tikalk.dev. 293 IN CNAME a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com.
a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com. 54 IN A 3.69.96.248
a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com. 54 IN A 3.122.187.35
a93bc7faf67614a3ab2663c211473d9d-1609469522.eu-central-1.elb.amazonaws.com. 54 IN A 35.158.198.54

;; Query time: 3 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Thu May 02 10:01:10 IDT 2024
;; MSG SIZE rcvd: 188

If you take a close look, this provided us with a CNAME record pointing to the load balancer provisioned by our cloud provider (AWS).

Considering I didn’t discuss the Ingress resource (yet), I’ll just show the same example as above, which behaves the same but with an Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    external-dns.alpha.kubernetes.io/hostname: webapp.infra.tikalk.dev
spec:
  rules:
  - host: webapp.infra.tikalk.dev
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 80

In this example, ExternalDNS would automatically create a DNS record for webapp.infra.tikalk.dev pointing to the IP address of the ingress controller, allowing external users to access the webapp via webapp.infra.tikalk.dev.

You can find these files in the following git repository [ seeposts/PGK-part-3 ].

DALL-E | software discovery

In conclusion, DNS and service discovery in Kubernetes encompass both internal mechanisms managed by CoreDNS and external integrations through ExternalDNS. Together, they provide a comprehensive solution for dynamic service discovery and DNS management, crucial for modern, scalable, and resilient cloud-native applications.

In my next post in the series I will focus on ingress traffic and routing. I hope to hear your comments on this post and its continuation.

Yours sincerely, Haggai Philip Zagury.
