TECH CORNER — Read this before running Kubernetes with Amazon Web Services’ EKS
With clients in a range of industries, Sclable is accustomed to delivering custom digital solutions while operating in a variety of pre-existing systems. In this post, our DevOps Specialist Manfred Krieger recaps the path the team took to run Kubernetes on Amazon Web Services’ Elastic Kubernetes Service*.
Nowadays, all major cloud providers offer some kind of managed Kubernetes service. Google Cloud Platform has GKE (Google Kubernetes Engine), Microsoft Azure has AKS (Azure Kubernetes Service), and Amazon Web Services (AWS) offers EKS (Elastic Kubernetes Service). The major benefit of running Kubernetes as a managed service is the abstraction of various cloud services behind the Kubernetes API, which in turn gives you access to cloud resources through standard Kubernetes objects. In other words, if you define your application as a set of Kubernetes API objects in YAML files, you should be able to deploy it to any of these services, whether on AWS, Microsoft Azure or Google Cloud Platform.
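To make that portability concrete, here is a minimal, provider-agnostic Deployment manifest; the my-app name and nginx image are placeholders we chose for illustration, but the same YAML should apply unchanged on EKS, AKS or GKE:

```yaml
# Minimal Deployment sketch; nothing here is provider-specific.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25 # placeholder image
          ports:
            - containerPort: 80
```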
However, since Kubernetes is the interface between a cluster administrator and the cloud provider it’s running on, you need to be aware that each provider has its own perspective on what a “ready-to-use cluster” means.
While working with AWS’ EKS, we made some interesting discoveries, which we think you should consider before deploying your first application.
The initial EKS deployment
A Kubernetes cluster consists of a variety of components that must be set up and configured. Some are obvious, like the instances that host your applications and the network that connects them. Others are easy to overlook but need to be set up too: access management for users, network resources, encryption of your cluster state, and storage management.
When we recently set up an EKS cluster, one of the first things we found out is that there are multiple deployment options to make the initial installation easier. There is a dedicated CLI tool (called eksctl), there is a Terraform module, and you can also deploy directly via the AWS console.
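As an illustration of the eksctl route, a cluster can be described declaratively in a ClusterConfig file; the cluster name, region, and node group sizing below are placeholders, not recommendations:

```yaml
# Minimal eksctl ClusterConfig sketch; all values are illustrative.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # placeholder cluster name
  region: eu-central-1      # placeholder region
nodeGroups:
  - name: workers
    instanceType: t3.medium # placeholder instance type
    desiredCapacity: 2
```

Feeding such a file to eksctl create cluster -f cluster.yaml provisions the control plane, the node group, and the surrounding VPC resources in one go.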
We also quickly learned about some additional topics that were not covered by the initial installation:
- User access management via AWS IAM (Identity and Access Management)
- Metrics and log aggregation via Container Insights and CloudWatch
- Horizontal node autoscaling
- Ingress access
So, let’s cover each one of them in detail.
User access management via AWS IAM
It’s important to know that the user who creates the EKS cluster automatically becomes the cluster administrator. Only that user! So, it’s critical to consider which user should create your cluster in the first place: you don’t want a permissions mistake to leave you with an inaccessible cluster. Happily, EKS offers multiple ways to adapt cluster permissions. Our approach was to configure the aws-auth ConfigMap and use the AWS IAM Authenticator.
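As a sketch of what that configuration looks like, the aws-auth ConfigMap in the kube-system namespace maps IAM identities to Kubernetes users and groups; the account ID and role name below are placeholders:

```yaml
# aws-auth ConfigMap sketch; the ARN is a placeholder.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # mapRoles grants the listed IAM role cluster-admin rights via system:masters.
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-admins
      username: eks-admin
      groups:
        - system:masters
```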
Metrics and log aggregation via Container Insights and CloudWatch
Keeping an eye on the metrics and logs of your cluster and the applications running on your cluster is the key to detecting potential issues and opportunities for improvement. EKS gives you the freedom to determine how your cluster will report such metrics.
During our recent project, we used a set of Helm Charts to install all the components necessary for reporting: Container Insights to report metrics (e.g. CPU and memory usage) and Fluent Bit to push container logs to AWS CloudWatch. The Helm Charts got this functionality up and running in no time. They do, however, require one additional step: ensuring AWS access from within a Kubernetes workload. To achieve this we used (as recommended by AWS) an OIDC provider to connect an AWS IAM role to a Kubernetes service account. Essentially, the OIDC provider lets a pod authenticate as its Kubernetes service account and exchange that identity for the AWS IAM role, enabling the pod to assume it. The pattern is quite straightforward: you create an AWS IAM role, assign it the required permissions, add a trust policy that references the Kubernetes service account, and then create and annotate the service account so that pods using it are allowed access to AWS.
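The visible end of that chain is the annotated service account. A minimal sketch, assuming a hypothetical log-shipping workload and a placeholder role ARN:

```yaml
# Service account sketch for IAM roles for service accounts (IRSA);
# the namespace, name, and role ARN are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/fluent-bit-cloudwatch
```

Any pod that runs with this service account can then assume the referenced IAM role, and nothing more.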
Clearly, this approach requires quite a few components. It does, however, encapsulate concerns so that each workload only has access to the things it needs.
Horizontal node autoscaling
Node autoscaling allows you to dynamically scale the underlying instances up or down depending on the resources your applications request. Using it in combination with horizontal pod autoscaling is a great way to react to computational spikes in a cost-effective way.
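For the pod half of that combination, a HorizontalPodAutoscaler like the sketch below scales a Deployment based on CPU utilization; the target name my-app and the thresholds are illustrative:

```yaml
# HorizontalPodAutoscaler sketch; target and thresholds are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app             # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # scale out above 70% average CPU
```

When the pods scale out beyond what the current nodes can hold, a node autoscaler kicks in and adds instances.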
It should be noted, however, that node autoscaling is not enabled on a freshly installed EKS cluster. AWS gives you a choice of different implementations, but they must be installed manually.
As with metrics and log aggregation, a dedicated workload running directly on EKS is responsible for adding and removing instances automatically, so it too needs access to the AWS API. Here, AWS offers two integrations: Karpenter and Cluster Autoscaler. Karpenter has some interesting features, including deciding which instance size is right for your workloads, but we used the more mature Cluster Autoscaler for our project. In addition to being sufficiently functional for our use case, it’s the tool we’re more experienced with.
After adding the required IAM roles and trust policies, we installed the Cluster Autoscaler via its Helm Chart and were able to properly scale the instances on our cluster.
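As a sketch of what that installation can look like, the cluster-autoscaler Helm Chart is typically pointed at the cluster via auto-discovery; the cluster name, region, and role ARN below are placeholders, and the exact value names may vary between chart versions:

```yaml
# values.yaml sketch for the cluster-autoscaler Helm Chart.
autoDiscovery:
  clusterName: demo-cluster # placeholder; must match your EKS cluster name
awsRegion: eu-central-1     # placeholder region
rbac:
  serviceAccount:
    annotations:
      # Placeholder IRSA role, set up as described in the previous section.
      eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/cluster-autoscaler
```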
Ingress access
During our project, we deployed an externally accessible web application on the EKS cluster and found that the integration for creating load balancer Services inside EKS is already set up and works as expected. In essence, a dedicated, cloud-managed Elastic Load Balancer was created without any additional settings required on the cluster.
To ensure the Elastic Load Balancer is set up correctly, there are some things to watch out for, and they depend on how you deployed your cluster. For example, the VPC usually contains subnets dedicated to internal and external traffic, which can be categorized as either private or public. EKS-specific tags on your subnets then determine whether a load balancer Service ends up public or private.
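Putting those two points together, the cluster-side part stays pleasantly small: a plain Service of type LoadBalancer is enough, and an annotation can request an internal load balancer instead of a public one. The names and ports below are placeholders:

```yaml
# LoadBalancer Service sketch; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # Uncomment to place the load balancer in the private subnets:
    # service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080 # placeholder container port
```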
After successfully setting up ingress access, we used the NGINX ingress controller to wire up external access to our application.
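A sketch of such an Ingress, assuming the controller was installed with the ingress class nginx; the host and backend Service are placeholders:

```yaml
# Ingress sketch routing a hostname to a backend Service via NGINX.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app # placeholder Service name
                port:
                  number: 80
```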
The bottom line
We believe that installing EKS with the tools provided by AWS makes it easy to get a cluster up and running quickly. By giving some thought to the topics we just laid out, you (like us!) can certainly reach a solid baseline upon which to productively run applications.
* Please keep in mind that all the services mentioned here are constantly evolving. Always double-check the official Amazon Web Services documentation on Elastic Kubernetes Service before getting started yourself.
This article was written for Sclable’s blog on Medium by our DevOps Specialist, Manfred Krieger. Follow us on LinkedIn to get notified of new posts or check out our website to see the work we do!
If you liked it, give it a 👏 and share if you ❤️.