How Amazon EKS will play an awesome role for Kubernetes customers.

Nikit Swaraj
MiQ Tech and Analytics
8 min read · Mar 6, 2018

During AWS re:Invent 2017, AWS released a new service called Amazon EKS (Amazon Elastic Container Service for Kubernetes). This new service helps you run the open source Kubernetes container management framework at scale on AWS. I recently used its preview and have penned down my thoughts on the world before EKS and after EKS.

The Beauty of Kubernetes

Even though Kubernetes has been around for a while now, it’s only recently that it has taken the DevWorld by storm. It has gained a lot of traction among AWS customers too.

As an open source container management framework, it helps run containers at scale. It’s equipped with a bunch of features and functionalities that help:

  • Build or run microservices
  • Build distributed applications in the 12-factor app pattern
  • Automatically bin-pack containers
  • Self-heal failed containers
  • Scale horizontally
  • Discover services and load-balance traffic (TCP, layer 4)
  • Roll out and roll back automatically
  • Execute batch workloads
  • Run anywhere

These features together as a package give you the primitives for building modern applications.

Why Do Developers Love Kubernetes?

Kubernetes comprises the tools required to solve modern application problems, which is why it is popular among the developer community. Here are a few reasons why developers are excited about it.

A. A vibrant and growing community of users and contributors

Kubernetes has an amazing and enthusiastic developer community. There are a number of metrics that justify this, such as the popularity of the project on GitHub, and there are a couple of websites that try to quantify it. Kubernetes ranks at the top on GitHub in terms of project discussion. The Kubernetes repositories on GitHub have almost 400,000 comments, about 30,000 stars, 60,000 commits, and almost 1,500 individual contributors.

In terms of release and commit velocity, it’s number one on GitHub today. Here are some more reasons why it is popular:

  • Kubernetes can run anywhere: Yes! It can run anywhere, whether on your laptop in the form of Minikube, on-premises, or in the cloud.
  • It features a single extensible API:

The Kubernetes API is extremely powerful. It can be thought of as a single abstraction layer over resources both in the cloud and on-premises. When you’re using Kubernetes on AWS, you can take advantage of the underlying platform: you get all the scale, performance, reliability, and breadth of features that come with AWS via Kubernetes cloud integrations. But you can also use that same API on-premises or on your laptop. Thus, Kubernetes makes development easy, and you can move things to the cloud when ready, as sketched below.
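For instance, here is a minimal illustration of that portability (the context names and manifest file are placeholders, assuming a Minikube context and an AWS cluster context are already configured in your kubeconfig):

# deploy a manifest to the local Minikube cluster
kubectl config use-context minikube
kubectl apply -f app-deployment.yaml

# deploy the exact same manifest to a cluster running on AWS
kubectl config use-context my-aws-cluster
kubectl apply -f app-deployment.yaml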

B. Supports cloud-native applications

All the functionality that comes with the Kubernetes package is actually a set of building blocks for cloud-native applications. These functionalities support microservices and cloud-native apps, which means where you run your Kubernetes actually matters.

The qualities of the underlying platform, such as scalability, speed, and stability, and the depth of integration with that platform make it easier to build and deploy an application using Kubernetes. You don’t have to spin up your own resources every time: if there is something nice on AWS that you want to use, you can easily use it with Kubernetes. For example, if you need a load balancer for your application deployed on Kubernetes, you don’t need to implement your own load balancer; just use the native one.
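As a rough sketch of what that looks like (a hypothetical Service manifest; the app name and ports are made up), declaring type: LoadBalancer is enough for the Kubernetes AWS cloud provider integration to provision an ELB for the service:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer    # Kubernetes asks AWS to create an ELB for this service
  selector:
    app: my-app         # send traffic to pods carrying this label
  ports:
    - port: 80          # port exposed on the load balancer
      targetPort: 8080  # port the application container listens on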

Most Kubernetes users believe that there’s value in running Kubernetes on AWS. A survey published by the CNCF last year says that over 60% of Kubernetes workloads run on AWS today.

Kubernetes Deployment Architecture

Here’s a walkthrough on how developers generally deploy a Kubernetes cluster on AWS.

Below is the typical architecture diagram of a Kubernetes cluster on AWS.

This is a natural deployment pattern where you run masters and etcd across three AZs for a highly available control plane. Each Kubernetes master essentially runs a copy of the same components. In addition to the masters, you also need to run etcd, which is the core persistence layer for Kubernetes. This is basically where all the critical metadata for your cluster lives. If you lose etcd in one availability zone, the etcd members in the other availability zones take over.

Finally, you need to run the actual worker nodes. This is where your applications run. These are generally deployed in auto-scaling groups across multiple AZs. You have a lot of control over the instance type, as well as the freedom to use on-demand or reserved instances, whatever suits your needs.
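For illustration, one common way to stand up a cluster like this yourself is a tool such as kops (the domain name, S3 state bucket, and instance sizes below are placeholders, not anything from this article):

# create a cluster with 3 masters and 3 workers spread across three AZs
kops create cluster \
  --name k8s.example.com \
  --state s3://my-kops-state-store \
  --zones us-east-1a,us-east-1b,us-east-1c \
  --master-zones us-east-1a,us-east-1b,us-east-1c \
  --node-count 3 \
  --node-size t2.medium \
  --master-size t2.medium \
  --yes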

This whole stack is generally a source of worry. A lot of the conversations people have go like this: “Yeah! We’re worried about this thing failing over in the middle of the night, and we’re having a hard time forecasting our growth and making sure that we can seamlessly upgrade.”

You have to make sure you’re running the right number and type of nodes and that all of this doesn’t come crashing down in the middle of the night. AWS received a lot of feedback along these lines in the past 6 to 12 months; here are a few examples:

  • Can AWS Run Kubernetes for Me?

AWS customers wanted support where they did not have to worry about or spend time on deploying a Kubernetes cluster. They wanted a solution where they did not have to think about configuration management, etcd, and HA for the Kubernetes masters.

  • Provide Capability to Perform Native AWS Integrations.

Today’s developers want to take advantage of the breadth of the AWS platform, and they always want top-notch AWS integrations. Some AWS resources are supported well in Kubernetes, but not everything. A few features were lacking, and those might be the very ones you’re already using elsewhere in your application stack on AWS.

The points mentioned above are what led AWS to develop and release EKS.

In essence, EKS is a platform for enterprises to run production-grade workloads. It provides features and management capabilities that allow enterprises to run real workloads at scale, with reliability, visibility, scalability, and ease of management.

EKS provides a native and upstream Kubernetes experience. Any modifications or improvements made on the back end, in how the service is built, will be transparent to the Kubernetes end-user experience.

If EKS customers want to use additional AWS services, the integrations are seamless.

Now with EKS, the masters and etcd are managed by AWS, and the worker nodes are taken care of by the users. In the end, it looks like the diagram shown below.

What’s nice is that the complete control plane is really simplified. Instead of running the Kubernetes control plane in your account, you connect to a managed Kubernetes endpoint in the AWS cloud.

The endpoint abstracts away the complexity of the Kubernetes control plane, and your worker nodes check in to this endpoint. You interact with this endpoint via kubectl, which replaces all the complexity of running your own Kubernetes control plane.
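In practice, once your kubeconfig points at that managed endpoint, a couple of ordinary kubectl commands are enough to confirm the setup (nothing EKS-specific here, just standard Kubernetes tooling):

kubectl cluster-info    # prints the managed master endpoint kubectl is talking to
kubectl get nodes       # lists the worker nodes that have checked in to it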

Kubernetes Deployment using EKS

Here’s a guide on how to create a Kubernetes cluster in AWS Console.

1. Open the EKS preview page as shown below and click on Create Cluster:

A page opens where the user needs to fill in the cluster name, Kubernetes version, VPC ID, and Role ARN.

Note: Currently, EKS supports Kubernetes 1.7.

2. Key in the VPC ID. The VPC tells EKS where the worker nodes will run in the user’s account and where resources need to be created in that account.

3. Key in the Role ARN. This IAM role ARN is used by EKS to manage resources in the user’s account for the Kubernetes masters.

4. Click on Create to launch a Kubernetes cluster. It takes approximately 6 to 7 minutes to set up a K8S cluster. The master endpoint is ready once the cluster is up and running.

5. Copy the master endpoint and put it in .kube/config in your home directory, as shown in the sketch after these steps.

6. Deploy the pods, as sketched below.
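A rough sketch of what steps 5 and 6 amount to (the endpoint, cluster name, and image are placeholders; a real ~/.kube/config also carries the cluster certificate data and an authentication stanza):

# ~/.kube/config (abridged)
apiVersion: v1
kind: Config
clusters:
- name: nickk8s
  cluster:
    server: https://<master-endpoint-copied-from-the-console>
contexts:
- name: nickk8s
  context:
    cluster: nickk8s
    user: aws
current-context: nickk8s

# deploy a couple of pods against that endpoint (kubectl run is fine on Kubernetes 1.7)
kubectl run nginx --image=nginx --replicas=2
kubectl get pods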

Support for Command Line

Amazon EKS also provides AWS CLI commands to perform operations on an EKS cluster.

If you want to create a cluster, you can use the below command line:

aws eks create-cluster --cluster-name nickk8s --desired-master-version 1.7 --role-arn arn:aws:iam::23xxxxxxx:role/k8s-role

If you want to describe cluster details and get the metadata, you can use the below command line.

aws eks describe-cluster --cluster-name nickk8s

The cluster metadata will look like this:

To list the clusters, use the below command line.

aws eks list-clusters

To delete the clusters, use the below command line.

aws eks delete-cluster --cluster-name nickk8s

EKS Master Visibility

You must be wondering by now what’s going on inside the Kubernetes masters and how to get visibility into them. How do we look at the metrics and logs of clusters? Is there any difference between logging on the workers and on the masters? The answer is yes!

EKS delivers all API logs to CloudTrail. When you call eks create-cluster, the API call is logged to CloudTrail and is thus available to you, just like the API-layer logs of any other AWS service. But there are some logs that exist on the masters, like the kube-apiserver logs, the kube-scheduler logs, and the kube-controller-manager logs; these will be available in CloudWatch Logs, so you can later collect and visualize them, for example using an ELK stack.
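For example, you could pull both streams with the regular CLIs (the log group name here is purely a guess for illustration; the preview does not spell out the exact naming):

# API-layer activity, such as every CreateCluster call, lands in CloudTrail
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster

# master-side component logs (kube-apiserver, scheduler, controller-manager) land in CloudWatch Logs
aws logs filter-log-events --log-group-name <eks-cluster-log-group> --filter-pattern ERROR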

Additionally, you can also install Kubernetes add-ons on the worker nodes.

EKS also provides an additional layer of authentication with AWS IAM. The diagram below explains it all.
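Roughly, this kind of IAM-based authentication is wired up through a kubectl exec plugin that exchanges your IAM identity for a bearer token. The sketch below uses the Heptio authenticator (later renamed aws-iam-authenticator); whether the preview uses exactly this mechanism and these names is an assumption on my part, and the cluster name is a placeholder:

# users section of ~/.kube/config: kubectl shells out to the authenticator,
# which signs a request with your IAM credentials and returns a token the masters can verify
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "nickk8s"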

So that was a glimpse of EKS (in preview). I’m just waiting for EKS to go GA. Are you too?

Share your views in the comment section.
