Configuring Local Ingress Domains for your Kind Cluster with MetalLB, Dnsmasq and Ingress-Nginx

Andrés Cabrera
8 min read · Apr 14, 2023


Hello everyone! If you’re running a Kubernetes cluster locally with Kind, you know how convenient it is for developing and testing applications without the complexity of a cloud deployment. However, accessing those apps through an ingress controller can be a challenge, especially when setting up local ingress domains. But fear not! With the assistance of MetalLB and Dnsmasq, we can simplify this process. So, let’s explore how to configure local ingress domains for your Kind cluster using these powerful tools, along with the Ingress-Nginx-Controller. Grab a coffee and let’s get started!

Alrighty, first things first — we need to get Dnsmasq up and running. This handy tool lets us map our domain names to the IP addresses of our Kubernetes services, making it a crucial component for setting up local ingress domains.

What is Dnsmasq and how can I use it?

Dnsmasq is a tool used for mapping domain names to IP addresses, which simplifies network management tasks. It is lightweight and often used as a DNS forwarder and DHCP server. In Kubernetes, Dnsmasq can be used to map domain names to the IP addresses of Kubernetes services, making it easier for developers to access and test their applications in a local cluster.

To install Dnsmasq on Ubuntu, simply run the following command in your terminal:

sudo apt install dnsmasq

Contrary to some articles, uninstalling systemd-resolved on Ubuntu is not necessary for using Dnsmasq; the two can be configured to work together without conflicts. First, disable Dnsmasq from auto-starting by running sudo systemctl disable dnsmasq . Next, create an rc.local file that starts the Dnsmasq daemon instead of systemd-resolved. Simply execute sudo nano /etc/rc.local to create and open the file, and add these lines to it:

#!/bin/bash
service systemd-resolved stop
service dnsmasq start

Make the script executable with sudo chmod +x /etc/rc.local so it actually runs at boot. After configuring Dnsmasq and systemd-resolved this way, restart your computer to activate the changes.

Configuring Dnsmasq is easy! The main configuration file is located at /etc/dnsmasq.conf , and we only need to add a couple of lines to it:

bind-interfaces
listen-address=127.0.0.1
server=1.1.1.1
server=1.0.0.1
conf-dir=/etc/dnsmasq.d/,*.conf

Finally, we can put additional configuration in /etc/dnsmasq.d/ and dnsmasq will pick it up when starting.

To configure Dnsmasq for your Kind cluster, create a file (e.g. kind.k8s.conf) and add the following line:

address=/kind.cluster/127.0.0.1

This line maps the local domain name kind.cluster, and every subdomain of it (Dnsmasq's address= directive acts as a wildcard), to 127.0.0.1, the loopback address of your local host.

Save the file and place it in the /etc/dnsmasq.d/ directory. This will ensure that the configuration file is loaded by Dnsmasq on startup.
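The two steps above can be sketched on the command line as follows (kind.k8s.conf is the example filename from above; the service name may differ depending on how Dnsmasq is managed on your system):

```shell
# Write the wildcard record into Dnsmasq's conf-dir and reload it.
# Requires root; adjust the service name for your setup.
echo 'address=/kind.cluster/127.0.0.1' | sudo tee /etc/dnsmasq.d/kind.k8s.conf
sudo service dnsmasq restart
```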

That’s it! With this configuration file in place, you can now access your Kubernetes services using the domain name kind.cluster. Give it a try and see how easy it is to set up local ingress domains for your Kind cluster using Dnsmasq!

To test that your Dnsmasq configuration is working correctly, you can use the dig command in your terminal. Simply run the following command:

$ dig test.kind.cluster

; <<>> DiG 9.18.12-0ubuntu0.22.04.1-Ubuntu <<>> test.kind.cluster
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63266
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;test.kind.cluster. IN A
;; ANSWER SECTION:
test.kind.cluster. 0 IN A 127.0.0.1
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Fri Apr 14 11:40:42 CEST 2023
;; MSG SIZE rcvd: 62

Now that we have a synthetic domain that routes properly using Dnsmasq, we can proceed to deploy and configure our Kind cluster.

Creating the Kind cluster

Before we proceed with creating the Kind cluster, we need to ensure that we have Kind installed and that we’re using the latest version. This will ensure that we have access to the latest features and improvements.

To install Kind, you can refer to the official Kind documentation for instructions. Alternatively, you can use the following commands to install Kind on Ubuntu:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.18.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Now that we have Kind installed and up to date, it’s time to create our Kind cluster and start testing our apps with local ingress domains. Don’t worry, it’s easy to follow along with the official Kind documentation.

Let’s take a closer look at some of the configurations. First up, we’ll create a YAML configuration file that includes port mappings for both HTTP and HTTPS. This will allow us to test our apps using the local ingress domains. Check out the YAML code below:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: 127.0.0.1
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    listenAddress: 127.0.0.1
    protocol: TCP
- role: worker
- role: worker

This YAML file specifies a cluster with one control plane and two worker nodes. The control plane node is configured with extra port mappings for ports 80 and 443, which are used for HTTP and HTTPS traffic respectively. The listenAddress is set to 127.0.0.1 to ensure that traffic is only accepted from the local host. We also set a label on the control plane node that we'll use later to deploy our Ingress-Nginx-Controller.

Using this configuration, we can create a cluster.

kind create cluster --config kind-config.yaml --name cluster01
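Before checking the port bindings, it can be worth confirming that all three nodes registered and that the ingress-ready label from the config landed on the control plane (a quick sanity check, not a required step):

```shell
# All three nodes should report Ready after a minute or so.
kubectl get nodes
# Only the control-plane node should match the label we set.
kubectl get nodes -l ingress-ready=true
```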

Once the Kind cluster is up and running, it’s important to verify that ports 80 and 443 are bound properly.

$ docker ps

CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS          PORTS                                                                     NAMES
3d626cfd2187   kindest/node:v1.26.3   "/usr/local/bin/entr…"   2 hours ago   Up 57 minutes                                                                             cluster01-worker2
aa2cf78c1e57   kindest/node:v1.26.3   "/usr/local/bin/entr…"   2 hours ago   Up 57 minutes   127.0.0.1:80->80/tcp, 127.0.0.1:443->443/tcp, 127.0.0.1:36649->6443/tcp   cluster01-control-plane
3b1438e6aa8d   kindest/node:v1.26.3   "/usr/local/bin/entr…"   2 hours ago   Up 57 minutes                                                                             cluster01-worker

Installing MetalLB using default manifests

To install MetalLB using the default manifests, you can follow these simple steps:

First, apply the MetalLB native manifest by running the command:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

Next, wait until the MetalLB pods (controller and speakers) are ready by running:

kubectl wait --namespace metallb-system --for=condition=ready pod --selector=app=metallb --timeout=90s

To complete the layer 2 configuration, you need to provide MetalLB with a range of IP addresses it controls; the range should live on the Docker network that Kind created (named kind). To find the network's address range, run:

docker network inspect -f '{{.IPAM.Config}}' kind
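As a convenience, a pool range like the one used below can be derived from that prefix in plain shell (a sketch that assumes a /16 network such as 172.18.0.0/16):

```shell
# Derive a MetalLB pool range from the kind network's /16 CIDR.
cidr="172.18.0.0/16"               # substitute the CIDR printed above
prefix="${cidr%.0.0/16}"           # strip the host part -> 172.18
pool="${prefix}.255.200-${prefix}.255.240"
echo "$pool"
```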

The output will include a CIDR such as 172.18.0.0/16. To configure MetalLB to use the 172.18.255.200 to 172.18.255.240 IP range, create an IPAddressPool and the related L2Advertisement using the following YAML manifest:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.240
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
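Assuming the manifest above is saved as metallb-config.yaml (the filename is arbitrary), apply it and check that both resources were created:

```shell
kubectl apply -f metallb-config.yaml
# Both resources should appear in the metallb-system namespace.
kubectl get ipaddresspool,l2advertisement -n metallb-system
```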

That’s it! You’ve now installed MetalLB using default manifests and configured the IP address range for load balancers.

Setting Up An Ingress Controller

To set up the Ingress-Nginx-Controller, apply the following YAML manifest:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

This manifest contains Kind-specific patches to forward the hostPorts to the ingress controller, set taint tolerations, and schedule it to the custom labeled node.

Once applied, wait for the Ingress to be ready to process requests by running:

kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s

Your Ingress-Nginx-Controller is now set up and ready to process requests.
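As a quick smoke test, an HTTP request to localhost should now reach the controller; with no Ingress resources defined yet, nginx's default backend typically answers with a 404:

```shell
# Expect an HTTP 404 status from the ingress-nginx default backend.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/
```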

Putting it all together

In this YAML configuration example, we’ll create two pods with HTTP echo, a LoadBalancer service, and an ingress resource that maps a hostname to the LoadBalancer service.

kind: Pod
apiVersion: v1
metadata:
  name: devops-app
  labels:
    app: http-echo
spec:
  containers:
  - name: devops-app
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=Hello devops!"
---
kind: Pod
apiVersion: v1
metadata:
  name: kind-app
  labels:
    app: http-echo
spec:
  containers:
  - name: kind-app
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=Hello kind!"
---
kind: Service
apiVersion: v1
metadata:
  name: lb-service
spec:
  type: LoadBalancer
  selector:
    app: http-echo
  ports:
  - port: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lb-ingress
spec:
  rules:
  - host: lb.test.kind.cluster
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: lb-service
            port:
              number: 5678

This configuration creates two pods with HTTP echo: devops-app and kind-app. It also creates a LoadBalancer service named lb-service and an ingress resource named lb-ingress. The ingress resource maps the hostname lb.test.kind.cluster to the LoadBalancer service on port 5678.

With this configuration, traffic to lb.test.kind.cluster will be directed to the LoadBalancer service, which will distribute traffic to the pods running the HTTP echo app. This provides a convenient and easy-to-use way to test and develop applications locally on your Kind cluster.
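Assuming the manifest above is saved as app.yaml (again, the filename is arbitrary), apply it and verify that MetalLB handed the service an external IP from the pool:

```shell
kubectl apply -f app.yaml
# EXTERNAL-IP should be an address from the MetalLB pool,
# e.g. 172.18.255.200, rather than <pending>.
kubectl get service lb-service
```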

The following command is used to test the load balancer service created in the previous example:

for _ in {1..10}; do
  curl lb.test.kind.cluster
done

The output below shows that both HTTP echo pods are running and responding to requests sent through the ingress resource to the LoadBalancer service. Each request to lb.test.kind.cluster is directed to one of the pods, and the response, either “Hello devops!” or “Hello kind!”, indicates which pod answered. This demonstrates the successful setup of local ingress domains for your Kind cluster using MetalLB, Dnsmasq, and the Ingress-Nginx-Controller.


Hello kind!
Hello devops!
Hello kind!
Hello devops!
Hello kind!
Hello devops!
Hello devops!
Hello kind!
Hello devops!
Hello kind!

The application can also be accessed from a browser such as Chrome.

That’s it for this tutorial on configuring local ingress domains for your Kind cluster using MetalLB and Dnsmasq. We hope you found it informative and easy to follow along.

By using these tools, you can now easily access and test your applications through an ingress controller in your local Kubernetes environment without any hassle. And with the help of Ingress-Nginx-Controller, you can easily manage and configure your ingress resources.

So go ahead and give it a try for yourself! Play around with the configurations, experiment with different settings, and see what works best for your needs. With the power of Kubernetes and these powerful tools at your fingertips, the possibilities are endless.

Thanks for reading, and happy devops!

Source code
