Installing Weave CNI on AWS EKS

Introduction

Samuel Addico
CodeOps
5 min read · Mar 10, 2019


In this article, I will demonstrate how to opt out of the AWS VPC CNI plugin and install an overlay network plugin, such as the Weave CNI plugin, instead.

What is AWS EKS?

Amazon launched its managed Kubernetes service, EKS, back in 2018. Eager to put on my learning hat and brave the scars of living on the cutting edge, I decided to jump on this train. I must confess AWS EKS is really handy, as it eliminates all the hardships (especially the networking bits of K8s) involved in setting up your own K8s cluster and the operational burden of managing it.

One of the features of EKS that I find most compelling is how it simplifies the Kubernetes network stack (basically pod networking) and enables micro-segmentation across security groups using its pre-configured CNI plugin: AWS’s VPC CNI. What this means is that your pods are assigned IPs as though they were EC2 instances and can also have security-group rules applied. A few advantages here:

  • Pod network connectivity performance is similar to the AWS VPC network: low latency, minimal jitter, high throughput, and high availability.
  • Users can express and enforce granular network policies and isolation comparable to those achievable with native EC2 networking and security groups.
  • Network operation is simple and secure. You don’t incur the encapsulation and de-encapsulation overhead that you do with overlay networks such as Flannel, Calico, etc.
  • VPC Flow Logs can be enabled and work as expected.
  • VPC routing policies work: traffic from the VPC can be routed directly to pods.
  • There’s less contention for network bandwidth because fewer pods share an Elastic Network Interface (ENI).

By the way, this plugin is open-sourced, to boot.

So what is the Problem?

The VPC CNI plugin has its own set of challenges, however. For example, the EC2 instance type and size determine the number of pods you can run on an instance, and there are situations where attaining higher pod density will force you to over-provision the instance types you use for your worker nodes. Your VPC may also be so IP-constrained that you cannot afford to assign an IP address from your VPC to each of your pods, though the VPC CNI custom networking feature attempts to address this by allowing you to specify a separate set of subnets for your pod network. Despite the VPC CNI’s advantages, folks may still want to use another CNI with EKS for various reasons, such as the ones just explained.
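To make the pod-density ceiling concrete, the VPC CNI limit can be computed from the instance's ENI limits. A minimal sketch, using the published ENI and per-ENI IP counts for an m5.large (these numbers differ per instance type):

```shell
# Max pods per node under the VPC CNI:
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# An m5.large supports 3 ENIs with 10 IPv4 addresses each.
enis=3
ips_per_eni=10
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "$max_pods"   # caps the node at this many pods, regardless of spare CPU/memory
```

So on an m5.large you hit a hard pod limit long before you exhaust compute, which is exactly the over-provisioning pressure described above.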

Disable AWS CNI

kubectl delete ds aws-node -n kube-system
kubectl get pods --all-namespaces

Installing CNI-Genie

So, as mentioned above, we will be disabling the AWS VPC CNI and installing our preferred overlay network, the Weave CNI. But before doing that, we need to install CNI-Genie. CNI-Genie is a CNI plugin that enables Kubernetes to have access to multiple implementations of the Kubernetes network model at runtime. CNI-Genie also supports assigning multiple IP addresses to a pod, each from a different CNI plugin.

Note that you should set up only the master node, without joining the worker nodes, until the installation is done.

kubectl apply -f https://raw.githubusercontent.com/Huawei-PaaS/CNI-Genie/master/conf/1.8/genie-plugin.yaml
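Once Genie is running, the CNI used for a given workload is selected with a pod annotation. A minimal sketch of that usage (the pod name and image here are placeholder examples, not from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-weave        # hypothetical example pod
  annotations:
    cni: "weave"           # tells CNI-Genie which plugin should wire this pod
spec:
  containers:
    - name: nginx
      image: nginx
```

Pods without the annotation fall back to Genie's default plugin selection.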

Installing Weave CNI Plugin

Now that we have installed the CNI-Genie plugin, we are ready to install the Weave plugin.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get daemonset weave-net --namespace kube-system
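The odd-looking query string in that apply command is just version pinning: it is the output of `kubectl version`, base64-encoded with newlines stripped, so the Weave endpoint can serve a manifest matched to your cluster version. A sketch of the encoding step, using a stand-in string instead of a live cluster:

```shell
# Stand-in for the real `kubectl version` output
ver="Client Version: v1.11.5"

# Same transformation the URL performs inline
encoded=$(printf '%s' "$ver" | base64 | tr -d '\n')
echo "$encoded"

# Decoding round-trips back to the original string
printf '%s' "$encoded" | base64 -d
```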

As you can see, the counts are all showing 0 because we have no worker nodes attached yet.

Launch EKS Worker Nodes

Now you can proceed with creating the worker nodes and joining them to the master. Because this post is not about setting up the EKS cluster, I won't be going through this. However, AWS has a well-detailed guide for launching worker nodes here. Once this is done, you should see output like the below when running these commands:

kubectl get pods --all-namespaces
kubectl get daemonset weave-net --namespace kube-system

As you can see, all our pods are running fine now. Let's deploy an application and confirm it is assigned an IP address by weave-net, as opposed to being assigned one from the AWS VPC.

kubectl apply -f employee-mngt-deployment.yaml
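For reference, `employee-mngt-deployment.yaml` could look something like the sketch below; the original manifest isn't shown in the post, so the image and port here are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee-managment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: employee-managment
  template:
    metadata:
      labels:
        app: employee-managment
    spec:
      containers:
        - name: employee-managment
          image: nginx          # placeholder image
          ports:
            - containerPort: 80 # placeholder port
```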
kubectl get pods

Now let's confirm that Weave assigned the IP:

kubectl describe pod employee-managment-6d66594b86-8xbhp

And there we have it: 10.40.0.2 was assigned.

Thanks for reading this far. If you found this post helpful, I’d really appreciate it if you recommend it (by clicking the clap button) so others can find it too! Also, follow me on Medium. Thanks again!
