Amazon VPC CNI for EKS

Overview

Venugopal Krishnappa
Engineered @ Publicis Sapient
4 min read · Apr 25, 2022


Kubernetes is a leading container orchestration system for automating software deployment, scaling, and management. Network configuration is one of the more complex parts of Kubernetes. A container network interface (CNI) lets you configure a container's network automatically when a container is created or destroyed, and the industry has developed a variety of CNI plugins. This article describes the Amazon VPC CNI plugin for EKS (Amazon's managed Kubernetes service) and lists other available plugins.

Topics discussed in this article:

  • Kubernetes Networking
  • Container Network Interface (CNI)
  • Amazon VPC CNI plugin for EKS
  • Other Kubernetes CNI providers

Kubernetes Networking

Pod networking and communication

Kubernetes networking revolves around pods. As the diagram shows, pods communicate as follows:

  • Each pod gets its own IP address from the network or CIDR range
  • A pod's IP is the same throughout the cluster
  • The pod IP (IP1, IP2, IP3, etc.) is assigned to eth0 (the first Ethernet interface) by default
  • Communication from one pod to another on the same node goes through a default bridge (or through routing; in this example, a bridge is used)
  • A veth (virtual Ethernet interface) pair connects each pod's network namespace to the node, acting as a tunnel for pod communication
  • Containers within the same pod communicate over localhost
  • Communication from one node to another goes through an overlay network (packets are encapsulated and exchanged between hosts/nodes)
  • All pods can communicate with all other pods without NAT (Network Address Translation)
  • All nodes can communicate with all pods without NAT

Note: All of these network configurations are created manually or programmatically; missing a configuration step can result in network failures.
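The per-node IP allocation described above can be sketched with Python's ipaddress module. This is a minimal illustration, assuming a 10.244.0.0/16 cluster CIDR split into /24 per-node subnets — illustrative values, not defaults that Kubernetes itself imposes:

```python
import ipaddress

# Illustrative cluster-wide pod CIDR (an assumption, not a Kubernetes default).
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")

# Carve one /24 pod subnet per node, as many CNI plugins do.
node_subnets = list(cluster_cidr.subnets(new_prefix=24))[:3]

# Each pod gets a unique IP from its node's subnet; no NAT is needed
# because every pod IP is routable throughout the cluster.
pods = {
    "pod-a": node_subnets[0][2],  # an address on node 1's subnet
    "pod-b": node_subnets[1][2],  # an address on node 2's subnet
}
for name, ip in pods.items():
    print(name, ip)   # pod-a 10.244.0.2 / pod-b 10.244.1.2
```

Because the subnets never overlap, any pod IP unambiguously identifies both the pod and the node hosting it, which is what lets traffic be routed without address translation.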

Container Network Interface (CNI)

CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with the network connectivity of containers and with removing allocated resources when a container is deleted. Because of this focus, CNI has a wide range of support, and the specification is simple and vendor-neutral.

CNI plugins are used by Kubernetes, Cloud Foundry, Podman, CRI-O, and others.

The diagram below illustrates a CNI plugin (Flannel, in this example) used by Kubernetes:

Pod networking through a CNI plugin (Flannel)
  • Pods have their own IP addresses
  • When you install Kubernetes, you choose a CNI plugin (in this example, Flannel)
  • Each node manages a pod subnet and allocates pod IPs locally
  • Communication from one pod to another goes through the CNI bridge (a layer 2 bridge)
  • Flannel creates an interface called flannel.1, which sits between the container runtime and the network and is also used to configure network routes
  • The flannel.1 interface communicates with other nodes through VXLAN (Virtual Extensible LAN; packets are encapsulated in UDP and exchanged between hosts)

Note: Network configuration is handled by the CNI plugin chosen during installation, which results in fewer network failures.
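One practical consequence of VXLAN encapsulation is MTU. The arithmetic below is a sketch of where Flannel's well-known 50-byte overhead comes from (the 1500-byte host MTU is a typical value, not a requirement):

```python
# VXLAN wraps each inner Ethernet frame in an outer IP/UDP/VXLAN envelope,
# so the pod-facing MTU must shrink to avoid fragmentation.
OUTER_IPV4 = 20       # outer IP header
OUTER_UDP = 8         # outer UDP header
VXLAN_HEADER = 8      # VXLAN header
INNER_ETHERNET = 14   # the pod's Ethernet frame header, carried as payload

overhead = OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER + INNER_ETHERNET
host_mtu = 1500                 # typical host NIC MTU (assumption)
pod_mtu = host_mtu - overhead   # MTU Flannel sets on the pod-facing interface

print(overhead)   # 50
print(pod_mtu)    # 1450
```

This is why pod interfaces on a VXLAN-backed Flannel cluster typically show an MTU of 1450 while the node's eth0 shows 1500.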

Amazon VPC CNI Plugin for EKS

As discussed in the previous section, most CNI plugins use VXLAN, BGP, or another overlay network to encapsulate and exchange packets between nodes. On AWS, any encapsulation happens in the underlying network hardware when using Elastic Network Interfaces (ENIs) inside a VPC, so the Amazon VPC CNI uses native VPC networking and removes the dependency on a secondary overlay network for encapsulation.

The diagram below illustrates the Amazon VPC CNI native plugin for EKS:

Pod networking through the Amazon VPC CNI
  • Each pod gets its own IP from the VPC CIDR (from either the primary or a secondary IP range)
  • A pod's IP is the same throughout the VPC, not just within the cluster
  • The VPC CNI uses native VPC networking, which delivers high performance and scales well
  • Adopting VPC features such as flow logs, VPN, and direct connectivity is straightforward because the plugin is built on the VPC
  • Pod security groups are supported, allowing or denying access to a pod from both external and internal sources
  • Secondary CIDR IP ranges are supported, so pods can draw additional IPs from a secondary subnet
  • Secondary IPs also make it possible to control network access from pods to AWS services outside the cluster
  • Maintained and supported by AWS
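Because each pod consumes a VPC IP on an ENI, the instance type caps pod density. The sketch below implements the max-pods formula from the AWS documentation; the ENI and per-ENI IP counts in the examples are AWS-published limits for those instance types:

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """AWS VPC CNI max-pods formula: one IP per ENI is the ENI's own
    primary address, so it is unavailable to pods; the +2 accounts for
    pods that use host networking (aws-node and kube-proxy)."""
    return enis * (ips_per_eni - 1) + 2

# ENI limits per AWS documentation for these instance types.
print(max_pods(3, 6))    # t3.medium -> 17
print(max_pods(3, 10))   # m5.large  -> 29
```

This is why small instance types can run surprisingly few pods under the VPC CNI's default configuration, and why secondary CIDRs and prefix assignment matter at scale.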

Note: Network configuration is handled by the AWS VPC CNI plugin, and all VPC features can be utilized since it is a native plugin.
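To gauge how much headroom a secondary CIDR adds, a small sketch using Python's ipaddress module (the 100.64.0.0/19 subnet is an illustrative choice, not a required value). AWS reserves five addresses in every subnet:

```python
import ipaddress

# AWS reserves 5 IPs per subnet: network address, VPC router, DNS,
# one reserved for future use, and the broadcast address.
AWS_RESERVED_PER_SUBNET = 5

def usable_ips(cidr: str) -> int:
    """Addresses a subnet can actually hand out to ENIs/pods."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

# An illustrative secondary pod subnet carved from a CG-NAT-style range.
print(usable_ips("100.64.0.0/19"))   # 8187
```

A single /19 secondary subnet therefore gives pods thousands of additional addresses without consuming the primary VPC CIDR.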

Other Kubernetes CNI providers

Some popular CNI providers include Flannel, Calico, Canal, Weave Net, and Cilium; see [5] for a feature comparison by provider.

Conclusion

The Kubernetes networking model supports multi-host networking, where pods can communicate with each other without needing to be on the same host. Interestingly, the Kubernetes project itself does not ship a default implementation of its network model, so understanding container network interfaces plays a major role when designing a cluster.

References

[1] https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html

[2] https://kubernetes.io/docs/concepts/cluster-administration/networking/

[3] https://www.youtube.com/watch?v=7LRtytR6ZbA&t=296s

[4] https://www.youtube.com/watch?v=U35C0EPSwoY&t=918s

[5] https://rancher.com/docs/rancher/v2.5/en/faq/networking/cni-providers/#cni-features-by-provider
