Understanding Kubernetes Networking — Part 4

Sumeet Kumar
Microsoft Azure
Oct 20, 2020

This is the fourth and final post in the series on understanding Kubernetes Networking.

In this post, we will discuss External-to-Service communication.

If you missed Part 3 on Pod-to-Service communication, you can check it here.

4. External to Service communication

  • Services can be reached from outside the cluster (the Internet) via NodePort, via a LoadBalancer (L4), or via an Ingress Controller (L7).
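As an illustrative sketch (the application name and ports here are hypothetical), a Service of type NodePort exposes an application on a static port of every node:

```
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical application name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80              # Service (ClusterIP) port
    targetPort: 8080      # container port inside the Pod
    nodePort: 30080       # static port opened on every node (30000-32767 range)
```

A LoadBalancer Service builds on this: the cloud provider provisions an external load balancer that forwards to the node ports.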

Ingress Flow

  • The packet arrives at the public IP of the load balancer.
  • The load balancer distributes the traffic across the VMs (nodes) in your cluster, using the port defined for your Service.
  • The packet is now at the node’s “eth0” interface and is forwarded to the bridge “cbr0” for routing.
  • But before the packet is handed to the bridge, iptables performs a DNAT so that it is routed to the right Pod.
  • The Pod responds to the client from its own IP, and conntrack rewrites the addresses so the reply appears to come from the Service.
  • L7 load balancers (such as Application Gateway) can additionally route traffic based on host names and URL paths.
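The connection-tracking entries behind this address rewriting can be inspected on a node with the conntrack tool (a sketch; shell access to the node and the presence of the tool on your node image are assumptions):

```
# List tracked TCP connections destined for port 80; each entry shows the
# original (pre-DNAT) tuple and the reply tuple used to rewrite responses.
sudo conntrack -L -p tcp --dport 80
```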

Egress Flow

  • When a Pod on your Kubernetes network sends traffic to the Internet, the packet starts in the Pod’s namespace and reaches the root namespace through the veth pair.
  • It then arrives at the bridge “cbr0”, from where it travels to the node’s “eth0” because the destination IP is not on the bridge’s network.
  • The packet passes through iptables before reaching the node’s “eth0”.
  • The load balancer accepts traffic from the VM’s NIC and knows nothing about the Pod IP space, so iptables performs a source NAT (SNAT) and rewrites the packet’s source to the node’s IP.
  • The packet then reaches the load balancer, where it goes through another NAT before heading out to the Internet.
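The SNAT step is visible in the nat table’s POSTROUTING chain on a node (a sketch; requires shell access to a node):

```
# kube-proxy (and the CNI plugin) install MASQUERADE rules here, which
# rewrite the Pod source IP to the node IP for traffic leaving the cluster.
sudo iptables -t nat -L POSTROUTING -n -v
```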

Example:

  • Here, ServiceType: LoadBalancer can be configured to bring traffic in from the Internet to your application, say — “nginx”.
  • We created a small “nginx” application and exposed it via ServiceType: LoadBalancer:
  • If we now access the public LB IP from an external browser:
  • And if we check iptables on the node, we observe the corresponding rules added for it (by kube-proxy):
  • Since the rules are configured (by kube-proxy), we can now access the “nginx” application from anywhere within the cluster using the ClusterIP, and externally using the LoadBalancer IP.
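The steps above can be reproduced with kubectl (a sketch; the deployment name “nginx” and the presence of a cloud load-balancer integration, such as AKS, are assumptions):

```
# Create a small nginx deployment
kubectl create deployment nginx --image=nginx

# Expose it via a Service of type LoadBalancer on port 80
kubectl expose deployment nginx --type=LoadBalancer --port=80

# Watch until the cloud provider assigns an external IP
kubectl get service nginx --watch
```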

KUBE-SERVICES is the entry point for service packets. What it does is to match the destination IP:port and dispatch the packet to the corresponding KUBE-SVC-* chain.

KUBE-SVC-* chain acts as a load balancer and distributes the packet to KUBE-SEP-* chain equally. Every KUBE-SVC-* has the same number of KUBE-SEP-* chains as the number of endpoints behind it.

KUBE-SEP-* chain represents a Service Endpoint. It simply does DNAT, replacing service IP:port with Pod’s endpoint IP:Port.

[ Above excerpt is directly from: https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/#pod-to-service ]
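These chains can be traced on a node in the nat table (a sketch; requires root on a node, and the `XXXX…` chain names below are hypothetical placeholders — the real names are hashes that differ per Service and endpoint):

```
# Entry point: match the Service's ClusterIP:port
sudo iptables -t nat -L KUBE-SERVICES -n | grep nginx

# The matched KUBE-SVC-* chain load-balances across KUBE-SEP-* chains
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n   # hypothetical chain name

# Each KUBE-SEP-* chain DNATs to one Pod endpoint IP:port
sudo iptables -t nat -L KUBE-SEP-XXXXXXXXXXXXXXXX -n   # hypothetical chain name
```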

Now, let’s try to expose a service via Ingress-Controller.

  • We have created two LoadBalancer Services for our two applications. [ Here, we have deployed them via LoadBalancer Services, but in the real world it would be better to deploy them via ClusterIP, so as not to expose the applications directly to the Internet. ]
  • We will now expose our applications with an Ingress Controller, which is itself deployed as a LoadBalancer Service.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: first.com
    http:
      paths:
      - backend:
          serviceName: hello-first   ## This is the first app.
          servicePort: 80
  - host: sec.com
    http:
      paths:
      - backend:
          serviceName: hello-second  ## This is the second app.
          servicePort: 80
  • If we check the Ingress service:
  • I have made a hosts-file entry to access the domains (as they are not registered):
  • Now, when we browse the applications via their respective hostnames, we observe:
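The hosts-file entry looks like this (a sketch; the IP shown is a hypothetical placeholder for the Ingress Controller’s external LoadBalancer IP):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
# <ingress-LB-external-IP>  first.com sec.com
20.0.0.10  first.com sec.com   # 20.0.0.10 is a hypothetical external IP
```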

More Information and Further Reading

Azure Kubernetes Networking

Commands

  • List Pods in all namespaces: kubectl get pods --all-namespaces
  • List all namespaces in the cluster: kubectl get namespace
  • Get all Pods in a specific namespace (otherwise the command uses the “default” namespace): kubectl get pods -n <namespace> (or) kubectl get pod <pod name> -n <namespace> --output=yaml
  • List all Pods and their containers (in a namespace): kubectl get pods -n <namespace> -o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
  • Describe a Pod: kubectl describe pod <pod name> -n <namespace>
  • Get a shell when a Pod has multiple containers: kubectl exec -i -t my-pod --container my-app -- /bin/bash [https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/]
  • Run a command in an existing container (one container inside the Pod): kubectl exec <pod name> -- ls / [https://kubernetes.io/docs/reference/kubectl/cheatsheet/]
  • List all Pods on a node: kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node name>
  • List all Pods behind a Service: kubectl get endpoints <service name>

This brings us to the end of the post and the series.

Hope you all liked it.
