Kubernetes Networking on AWS, Part II

Vilmos Nebehaj
Published in Elotl blog
Nov 11, 2019 · 4 min read

Last time we looked at a simple networking setup for Kubernetes on AWS using kubenet. An alternative is the official AWS VPC CNI plugin, which is also the default CNI plugin on EKS, AWS’ managed Kubernetes service. Let’s take a look at how this plugin works and what its pros and cons are.

Pod networking via CNI

When it comes to CNI plugins that implement Kubernetes pod networking, there are two main types:

  • Overlay networks and
  • Layer 3 implementations.

Overlay networks use tunnels to route pod traffic among worker nodes, thus implementing a virtual network on top of an existing layer 3 network. Flannel and Weave are examples of CNI plugins that enable pod networking via an overlay network.

Overlay network, using VXLAN tunnels. Source: https://techcommunity.microsoft.com/t5/Networking-Blog/Introducing-Kubernetes-Overlay-Networking-for-Windows/ba-p/363082

Layer 3 CNI plugins, the other main type, set up routing for pod traffic without tunnels or any kind of overlay network. They usually rely on a routing protocol such as BGP and require that nodes be routable to one another. Calico and kube-router, for example, are layer 3 CNI plugins.

Layer 3 networking with BGP. Source: https://rancher.com/docs/rancher/v2.x/en/faq/networking/cni-providers/

Pod networking via the VPC CNI plugin

The AWS VPC CNI plugin implements another approach: it allocates IP addresses for pods from the VPC address space. AWS allows extra network interfaces (ENIs) to be attached to EC2 instances, and each interface can have multiple IP addresses. Since all of these IP addresses come from the same VPC address space, containers and nodes can communicate directly via the VPC network fabric, without any extra route configuration or an overlay.

Pod networking with the AWS VPC CNI plugin

For example, the figure above shows two worker nodes, each with a primary ENI and a second ENI, and each ENI has a primary and a secondary IP address. All pods can communicate with each other via the VPC network, even when they are running on different nodes. Neither tunnels nor a route distribution mechanism is necessary. The result is a flat network, so interoperability with other AWS services and communication with services outside of the VPC can be handled via the regular VPC mechanisms.
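Under the hood, the plugin’s IPAM daemon manages these ENIs and secondary IP addresses through the EC2 API. The following is a rough sketch of what that amounts to, using boto3 with a placeholder ENI ID and region; in practice the plugin does all of this automatically, and you would not run these calls by hand:

```python
# A rough sketch of the EC2 calls the VPC CNI plugin's IPAM daemon automates.
# The ENI ID and region below are placeholders; requires boto3 and EC2 permissions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
eni_id = "eni-0123456789abcdef0"                     # placeholder ENI ID

# Ask EC2 for two more secondary private IPs on the ENI; the plugin hands
# addresses like these out to pods as they are scheduled onto the node.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=eni_id,
    SecondaryPrivateIpAddressCount=2,
)

# List the primary and secondary IPs now attached to the ENI.
resp = ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])
for addr in resp["NetworkInterfaces"][0]["PrivateIpAddresses"]:
    kind = "primary" if addr["Primary"] else "secondary"
    print(f'{addr["PrivateIpAddress"]} ({kind})')
```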

The plugin also supports jumbo frames, so pods running on instance types with high network performance can take full advantage of the available throughput.

Each EC2 instance type limits both the number of ENIs it can attach and the number of IP addresses each ENI can hold. In addition, the primary IP address on each ENI is reserved: it is used as the source IP address when communicating with addresses outside of the VPC, so it is not handed out to pods. As a result, if an instance can attach M ENIs with N IP addresses per ENI, the maximum number of pods it can run is M*(N-1)+2 (the extra two accounts for the kube-proxy and aws-node pods, which use host networking; aws-node is the CNI pod).

For example, a t3.nano instance can attach only two ENIs, each with a primary and a secondary IP address, so it can provide IP addresses for just two pods, besides the two system pods mentioned above (unless other pods also use host networking and share the node’s IP address). A handy list of per instance type limits is available here: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt.
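To make the arithmetic concrete, here is a minimal sketch of the formula in Python. The ENI and IP limits below are example values for a handful of instance types; the eni-max-pods.txt file linked above is the authoritative source:

```python
# Example (ENIs, IPv4 addresses per ENI) limits for a few instance types.
# These are illustrative; consult eni-max-pods.txt for authoritative numbers.
ENI_LIMITS = {
    "t3.nano":   (2, 2),
    "t3.medium": (3, 6),
    "m5.large":  (3, 10),
}

def max_pods(instance_type: str) -> int:
    """Return the VPC CNI pod limit: ENIs * (IPs per ENI - 1) + 2.

    The -1 accounts for the reserved primary IP on each ENI; the +2 accounts
    for kube-proxy and aws-node, which use host networking.
    """
    enis, ips_per_eni = ENI_LIMITS[instance_type]
    return enis * (ips_per_eni - 1) + 2

if __name__ == "__main__":
    for itype in ENI_LIMITS:
        print(f"{itype}: {max_pods(itype)} pods")  # t3.nano: 4, t3.medium: 17, m5.large: 29
```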

Unfortunately, aws-vpc-cni does not support attaching security groups to individual pods; security groups can only be applied at the node level. To tighten security and limit communication with services inside or outside of the VPC (for example, applications running in Kubernetes pods talking to a managed database), the best bet is to define NetworkPolicy resources. The AWS VPC CNI plugin itself does not enforce Kubernetes NetworkPolicy, but adding Calico to the mix solves this issue.
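For illustration, here is a sketch of such a NetworkPolicy created with the official Kubernetes Python client. The names, labels, namespace, and port are made-up examples, and on a cluster using the VPC CNI plugin the policy only takes effect once Calico (or another policy engine) is installed:

```python
# A hedged sketch: a NetworkPolicy that only lets pods labeled app=frontend
# reach pods labeled app=db on port 5432. All names and labels are examples.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="allow-frontend-to-db", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "db"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "frontend"}
                        )
                    )
                ],
                ports=[client.V1NetworkPolicyPort(port=5432, protocol="TCP")],
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="default", body=policy
)
```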

Further implementation details on the AWS VPC CNI plugin can be found here: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/cni-proposal.md.

Conclusion

The AWS VPC CNI plugin provides pod networking for Kubernetes clusters running on AWS with near-native VPC network performance. However, it also limits the number of pods a worker node can run at a time, which might be a problem if one wants to run a large number of small pods, and its security group support is lacking.

Next, we will take a look at how networking is implemented with our nodeless Kubernetes runtime, which does not have the limitations mentioned above. Stay tuned!
