Exposing TFTP Server as Kubernetes Service — Part 1

Darpan Malhotra
6 min read · May 26, 2022


If you are here, you already know that Kubernetes is the most popular system for deploying containerized applications. Kubernetes not only solves the problem of orchestrating containers across nodes, but also exposes the network services offered by your containerized applications, both within and outside the cluster. When you begin the journey of deploying and exposing containerized applications with Kubernetes, HTTP is the network service used as the example in most of the literature on the Internet.
But some other network services do not get exposed that easily, and TFTP is one of them. This series of articles shares the lessons I learned about Kubernetes networking while exposing TFTP server pods as a Kubernetes service on an on-prem cluster. The following three service types will be covered in this series:
a. ClusterIP
b. NodePort
c. LoadBalancer

TFTP (Trivial File Transfer Protocol) is a client-server protocol for transferring files over UDP, with the server listening on port 69. It is typically used to transfer firmware or configuration files to devices (clients).

Typical Client-Server exchange to READ a file
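In outline, a successful READ proceeds like this (a simplified sketch based on RFC 1350; the ephemeral port numbers are illustrative):

client:49152 -> server:69      RRQ "dummy.txt" (mode: octet)
server:53210 -> client:49152   DATA block 1 (512 bytes)
client:49152 -> server:53210   ACK block 1
...                            (repeats until a DATA block shorter than 512 bytes)
server:53210 -> client:49152   DATA block n (< 512 bytes, ends the transfer)
client:49152 -> server:53210   ACK block n

Note that the server replies from an ephemeral port rather than from port 69; this detail of TFTP will matter when we expose the server as a Kubernetes service.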

My journey to exposing a TFTP server on Kubernetes in a high-performance way has been a long one. In this article (Part 1), we will just prepare our setup, which will be used to expose the TFTP server pod as a Kubernetes service in future articles.

1. Setting up an on-prem Kubernetes cluster

I started with a Kubernetes cluster (v1.24) of 3 nodes: 1 control-plane node and 2 workers. The CRI used is containerd.

The cluster is bootstrapped using kubeadm without any special config.

# kubeadm init --pod-network-cidr=192.168.0.0/16 --service-cidr=10.96.0.0/16
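The worker nodes then join the cluster with the kubeadm join command printed by kubeadm init; it looks like the following, where the endpoint, token and CA cert hash are placeholders:

# kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>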

Listing the nodes:
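A wide listing shows, among other things, the OS image and kernel version columns referred to below:

# kubectl get nodes -o wide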

It is evident that all nodes are CentOS 7.7 machines with the 3.10.0-1062.1.2.el7.x86_64 kernel.

2. Setting up CNI

Initially, I chose Calico as the CNI plugin for this cluster. The installation instructions are provided by Project Calico. Download the Calico networking manifest:

# curl -OL https://projectcalico.docs.tigera.io/manifests/calico.yaml

At the time of writing this article, v3.22.2 is the latest version of Calico CNI.

Apply the manifest using the following command:

# kubectl apply -f calico.yaml

Verify that the Calico pods are running (a calico-kube-controllers Deployment and a calico-node DaemonSet).
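For example:

# kubectl get pods -n kube-system -o wide | grep calico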

To learn about Calico's configuration, let us download the calicoctl tool on the control-plane node.

# curl -L https://github.com/projectcalico/calico/releases/download/v3.22.2/calicoctl-linux-amd64 -o calicoctl
# chmod +x ./calicoctl

By default, Calico uses the Kubernetes pod CIDR as its IP pool. In step 1, we used 192.168.0.0/16 as the pod CIDR. Let us confirm the same using calicoctl.
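The pool can be listed with:

# ./calicoctl get ippool -o wide

The wide output also shows the pool's IPIP mode, which is relevant to the encapsulation discussion below.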

Calico uses its IPAM plugin (calico-ipam) to dynamically break this /16 pool into /26 blocks and allocate those blocks to nodes. Pods running on a node are then assigned IP addresses from the block allocated to that node. Let us explore the IP address block allocation done by calico-ipam.
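The per-node block allocations can be inspected with:

# ./calicoctl ipam show --show-blocks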

By default, Calico uses BGP to form a routing mesh between all nodes of the cluster and IP-in-IP encapsulation for pod-to-pod communication.
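The BGP mesh can be verified with calicoctl (run as root on the node being inspected):

# ./calicoctl node status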

3. Deploy TFTP Server and Client

As the on-prem cluster is ready with the CNI configured, it is time to deploy the TFTP server pod. A Deployment is created to run the TFTP server pod. Note that it exposes the TFTP service on UDP port 69. The manifest for the TFTP server Deployment is sketched below.
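This is a minimal sketch rather than the exact manifest; the image name is a placeholder and the label names are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tftp-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tftp-server
  template:
    metadata:
      labels:
        app: tftp-server
    spec:
      containers:
      - name: tftp-server
        image: <tftp-server-image>  # placeholder: the actual image is not reproduced here
        ports:
        - containerPort: 69  # TFTP service port
          protocol: UDP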

The TFTP server is configured to serve only read requests (RRQ) for a single file, named dummy.txt.

# ls -l dummy.txt
-rw-r--r-- 1 root root 2016 May 8 02:37 dummy.txt

This test file contains text from RFC 1350. Note that the size of the file is 2016 bytes, so a successful transfer comprises an exchange of 4 data blocks (512 + 512 + 512 + 480 bytes).

To verify the TFTP server's functionality, a TFTP client is needed. We will use an Ubuntu container and install a TFTP client inside it. The manifest for the TFTP client Deployment is sketched below.
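Again a minimal sketch (the label names are assumptions); sleep infinity keeps the container running so that we can exec into it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tftp-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tftp-client
  template:
    metadata:
      labels:
        app: tftp-client
    spec:
      containers:
      - name: tftp-client
        image: ubuntu
        command: ["sleep", "infinity"]  # keep the pod running so we can exec into it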

Deploy server and client pods.

# kubectl apply -f tftp-server-deployment.yaml -f tftp-client-deployment.yaml
deployment.apps/tftp-server created
deployment.apps/tftp-client created

Now we have two pods running, with the client and server on different nodes. Let us exec into the tftp-client-65f8f78d87-xmf8j container and install the tftp client utility.
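A sketch of those steps (tftp-hpa is one of the TFTP client packages available in Ubuntu; apt-get runs inside the container):

# kubectl exec -it tftp-client-65f8f78d87-xmf8j -- bash
# apt-get update && apt-get install -y tftp-hpa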

4. How Calico connects Pod namespace to Host

Each pod on a node has its own network namespace, i.e. a complete networking stack including interfaces, IP addresses, routing table, etc. Let us list all network namespaces on the worker nodes.

On the node learn-k8s-2 (where the server runs):

# ip netns list 
cni-479464cd-fbaf-be67-3aec-65400191dc74 (id: 1)

Listing the interfaces in this namespace (id: 1)
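This can be done with:

# ip netns exec cni-479464cd-fbaf-be67-3aec-65400191dc74 ip addr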

Listing the interfaces in the root network namespace of the node learn-k8s-2:
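This is a plain listing in the node's own shell:

# ip addr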

It is evident that the eth0 interface of the server container and the calic0bf1043683 interface of the node form a veth pair. One end of the veth pair is in the pod's network namespace and the other end is in the root network namespace of the node. The beauty of a veth pair is that packets transmitted on one device of the pair are immediately received on the other device. That is how a pod gets connectivity to the root network namespace of the node.
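One way to confirm the pairing (the interface indexes below are illustrative): in ip link output, each veth end carries an @ifN suffix, where N is the ifindex of its peer in the other namespace.

# ip netns exec cni-479464cd-fbaf-be67-3aec-65400191dc74 ip -o link show eth0
3: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> ...   # peer has ifindex 12
# ip -o link show calic0bf1043683
12: calic0bf1043683@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> ...   # points back at ifindex 3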

The same can be seen on the node learn-k8s-3 (where the client runs):

# ip netns list 
cni-51c08dbb-b08d-0e38-405e-fb121e8f98d9 (id: 2)
cni-b7e6735c-5c48-432d-b465-2e598ce5a022 (id: 0)

Listing the interfaces in this namespace (id: 2)
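As before:

# ip netns exec cni-51c08dbb-b08d-0e38-405e-fb121e8f98d9 ip addr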

Listing the interfaces in the root network namespace of the node learn-k8s-3:

It is evident that the eth0 interface of the client container and the califa5ffa0a589 interface of the node form a veth pair. Also, note the presence of the tunl0 interface on all nodes. This interface is created by Calico, and its role will be discussed in the next article.

In this article, we created a 3-node on-prem Kubernetes cluster with the Calico CNI. After deploying the TFTP server and client pods, we inspected the network interfaces in the pod and root namespaces on the nodes. At this point, the setup is just right to expose the TFTP server pod as a Kubernetes service.

In the next article, we will discuss exposing the TFTP server pod as a ClusterIP service and how pod-to-service communication is achieved.
