Kubernetes on AWS: Step by Step

Kubernetes Advocate
AVM Consulting Blog
Apr 29, 2021

In this article we provide step-by-step instructions for several common ways to set up a Kubernetes cluster on AWS:

  • Creating a cluster with kops — kops is a production-grade tool used to install, upgrade, and manage Kubernetes on AWS.
  • Creating a cluster with Amazon Elastic Kubernetes Service (EKS) — the managed Kubernetes service provided by Amazon. You can create a Kubernetes cluster with EKS using the AWS Management Console.
  • Creating a cluster with Rancher — Rancher is a Kubernetes management platform that eases the deployment of Kubernetes and containers.

Deploying Kubernetes on AWS Using Kops

Kops is a production-grade tool used to install, upgrade, and operate highly available Kubernetes clusters on AWS and other cloud platforms using the command line.

Installing a Kubernetes Cluster on AWS

Before proceeding, make sure you have installed the kubectl, kops, and AWS CLI tools.
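
A quick way to confirm all three tools are available is to print their versions:

# kops version
# kubectl version --client
# aws --version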

Configure AWS Client with Access Credentials

Make sure the AWS IAM user has the following permissions for kops to function properly (a CLI sketch for creating such a user follows the list):

– AmazonEC2FullAccess
– AmazonRoute53FullAccess
– AmazonS3FullAccess
– IAMFullAccess
– AmazonVPCFullAccess
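
One way to set this up from the CLI is to create a dedicated group with these managed policies and add a user to it; the kops group and user names below are illustrative:

# aws iam create-group --group-name kops
# aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
# aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
# aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
# aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
# aws iam create-user --user-name kops
# aws iam add-user-to-group --user-name kops --group-name kops
# aws iam create-access-key --user-name kops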

Configure the AWS CLI with this user’s credentials by running:

# aws configure
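
The command prompts for the access key created above plus a default region and output format; the values below are placeholders:

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json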

Create S3 Bucket for Cluster State Storage

Create a dedicated S3 bucket that will be used by kops to store the state representing the cluster. We’ll name this bucket my-cluster-state:

# aws s3api create-bucket --bucket my-cluster-state
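
Note that outside us-east-1 the S3 API requires an explicit location constraint, so if you want the bucket in the cluster’s region (us-west-2 below), the call looks like this:

# aws s3api create-bucket \
    --bucket my-cluster-state \
    --create-bucket-configuration LocationConstraint=us-west-2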

Make sure to enable bucket versioning so that you can later recover or revert to a previous state:

# aws s3api put-bucket-versioning --bucket my-cluster-state --versioning-configuration Status=Enabled
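
You can verify that versioning is enabled:

# aws s3api get-bucket-versioning --bucket my-cluster-state
{
    "Status": "Enabled"
}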

DNS Setup

On the DNS side, you can go with either public or private DNS. For public DNS, a valid top-level domain or subdomain is required to create the cluster. DNS is required by the worker nodes to discover the master, and by the master to discover all the etcd servers. If your domain’s registrar is not AWS, create a Route 53 hosted zone for the domain on AWS and update the nameserver records at your registrar accordingly.

In this example, we’ll be using a simple, private DNS setup to create a gossip-based cluster. The only requirement is for our cluster name to end with k8s.local.
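
As a convenience, and to match the $NAME variable used in the upgrade step later, the cluster name and state store can be exported as environment variables (kops honors KOPS_STATE_STORE):

# export NAME=my-cluster.k8s.local
# export KOPS_STATE_STORE=s3://my-cluster-state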

Creating the Kubernetes Cluster

The following command will create a cluster with one master (an m3.medium instance) and two nodes (t2.medium instances) in the us-west-2a availability zone:

# kops create cluster \
    --name my-cluster.k8s.local \
    --zones us-west-2a \
    --dns private \
    --master-size=m3.medium \
    --master-count=1 \
    --node-size=t2.medium \
    --node-count=2 \
    --state s3://my-cluster-state \
    --yes

Some of the command options in the above example have default values: --master-size, --master-count, --node-size, and --node-count. We’ve used the default values here, so the result would be the same if we hadn’t specified those options. Also, note that kops will create one master node in each availability zone specified, so --zones us-west-2a,us-west-2b would result in two master nodes, one in each of the two zones (even if --master-count was not specified on the command line).

Note that cluster creation may take a while, as instances must boot, download the standard Kubernetes components, and reach a “ready” state. Kops provides a command to check the state of the cluster and confirm that it’s ready:

# kops validate cluster --state=s3://my-cluster-state
Using cluster from kubectl context: my-cluster.k8s.local

Validating cluster my-cluster.k8s.local

INSTANCE GROUPS
NAME                ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2a   Master  m3.medium    1    1    us-west-2a
nodes               Node    t2.medium    2    2    us-west-2a

NODE STATUS
NAME                                          ROLE    READY
ip-172-20-32-203.us-west-2.compute.internal   node    True
ip-172-20-36-109.us-west-2.compute.internal   node    True
ip-172-20-61-137.us-west-2.compute.internal   master  True

Your cluster my-cluster.k8s.local is ready
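
You can also check the nodes directly with kubectl; expect output along these lines (names, ages, and versions will vary):

# kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-172-20-32-203.us-west-2.compute.internal   Ready    node     5m    v1.x.x
ip-172-20-36-109.us-west-2.compute.internal   Ready    node     5m    v1.x.x
ip-172-20-61-137.us-west-2.compute.internal   Ready    master   7m    v1.x.x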

If you want to make some changes to the cluster, do so by running:

# kops edit cluster my-cluster.k8s.local
# kops update cluster my-cluster.k8s.local --yes
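
For example, to change the worker node count you would edit the nodes instance group rather than the cluster spec; a sketch:

# kops edit ig nodes --name my-cluster.k8s.local --state s3://my-cluster-state

(set minSize and maxSize under spec in the editor, then save and exit)

# kops update cluster my-cluster.k8s.local --state s3://my-cluster-state --yes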

Upgrading the Cluster to a Later Kubernetes Release

Kops can upgrade an existing cluster (master and nodes) to the latest recommended release of Kubernetes without having to specify the exact version. Kops supports rolling cluster upgrades where the master and worker nodes are upgraded one by one.

1. Update Kubernetes

# kops upgrade cluster \
    --name $NAME \
    --state s3://my-cluster-state \
    --yes

2. Update the state store to match the cluster state.

# kops update cluster \
    --name my-cluster.k8s.local \
    --state s3://my-cluster-state \
    --yes

3. Perform the rolling update.

# kops rolling-update cluster \
    --name my-cluster.k8s.local \
    --state s3://my-cluster-state \
    --yes

This will perform updates on all instances in the cluster, first the master and then the workers.
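
Running the same command without --yes performs a dry run, listing which instance groups still need updating; the output looks roughly like this:

# kops rolling-update cluster --name my-cluster.k8s.local --state s3://my-cluster-state
NAME                STATUS        NEEDUPDATE  READY  MIN  MAX  NODES
master-us-west-2a   NeedsUpdate   1           0      1    1    1
nodes               NeedsUpdate   2           0      2    2    2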

Delete the Cluster

To destroy an existing cluster that we used for experimenting or trials, for example, we can run:

# kops delete cluster my-cluster.k8s.local \
    --state=s3://my-cluster-state \
    --yes
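
Here too, omitting --yes gives a safe preview: kops only lists the AWS resources that would be deleted, without removing anything:

# kops delete cluster my-cluster.k8s.local --state=s3://my-cluster-state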

For further reading, see AWS Documentation: Manage Kubernetes Clusters on AWS Using Kops ›

Using Kubernetes EKS Managed Service

Amazon Elastic Kubernetes Service (EKS) is a fully managed service that takes care of all the cluster setup and creation, ensuring multi-AZ support on all clusters and automatic replacement of unhealthy instances (master or worker nodes).

By default, clusters in EKS consist of 3 masters spread across 3 different availability zones to protect against the failure of a single AWS availability zone.

Standing up a new Kubernetes cluster with EKS can be done simply using the AWS Management Console. After getting access to the cluster, containerized applications can be scheduled in the new cluster in the same fashion as with any other Kubernetes installation.
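
The console flow can also be scripted with the AWS CLI; a minimal sketch, assuming an existing EKS service role and VPC subnets (the ARN and subnet IDs are placeholders):

# aws eks create-cluster \
    --name my-eks-cluster \
    --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
    --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222

# aws eks update-kubeconfig --name my-eks-cluster
# kubectl get svc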

For further reading, see AWS documentation: Amazon EKS ›

Launching Kubernetes on EC2 Using Rancher

Rancher is a complete container management platform that eases the deployment of Kubernetes and containers.

Setting Up Rancher in AWS

Rancher (the application) runs on RancherOS, which is available as an Amazon Machine Image (AMI), and thus can be deployed on an EC2 instance.

Create RancherOS Instance on EC2

After installing and configuring the AWS CLI tool, proceed to create an EC2 instance using a RancherOS AMI. Check the RancherOS documentation for AMI IDs for each region. For example, this command:

$ aws ec2 run-instances \
    --image-id ami-12db887d \
    --count 1 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-groups my-sg

will create one new t2.micro EC2 instance with RancherOS in the ap-south-1 AWS region. Make sure to use the correct key name and security group. Also, make sure the security group allows inbound traffic on TCP port 8080 to the new instance, as shown below.
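
If that rule isn’t in place yet, it can be added with the AWS CLI; 0.0.0.0/0 is used here for brevity and should be narrowed in practice:

$ aws ec2 authorize-security-group-ingress \
    --group-name my-sg \
    --protocol tcp \
    --port 8080 \
    --cidr 0.0.0.0/0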

Start Rancher Server

When the new instance is ready, connect over SSH and start the Rancher server:

$ sudo docker run --name rancher-server -d --restart=unless-stopped \
    -p 8080:8080 rancher/server:stable
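
To watch the server come up, follow the container logs:

$ sudo docker logs -f rancher-server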

This might take a few minutes. Once done, the UI can be accessed on port 8080 of the EC2 instance. Since by default anyone can access Rancher’s UI and API, it is recommended to set up access control.

Creating a Kubernetes cluster via Rancher in AWS

Configure Kubernetes environment template

An environment in Rancher is a logical entity for sharing deployments and resources with different sets of users. Environments are created from templates.

Create the Kubernetes Cluster (environment)

Adding a Kubernetes environment is just a matter of selecting an appropriately configured template for our use case and entering the cluster name. If access control is turned on, we can add members and select their membership role. Anyone added to the membership list will have access to the environment.

Add Hosts to Kubernetes Cluster

We need to add at least one host to the newly created Kubernetes environment. In this case, the hosts will be previously created AWS EC2 instances.

Once the first host has been added, Rancher will automatically start deploying the infrastructure (master), including Kubernetes services (e.g. kubelet, etcd, kube-proxy). Hosts that will be used as Kubernetes nodes require TCP ports 10250 and 10255 to be open for kubectl (see the example below). Make sure to review the full list of Rancher requirements for the hosts.
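
These ports can be opened on the hosts’ security group the same way as before; a sketch using the range covering both ports, again with an open CIDR for brevity:

$ aws ec2 authorize-security-group-ingress \
    --group-name my-sg \
    --protocol tcp \
    --port 10250-10255 \
    --cidr 0.0.0.0/0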

It might take a few minutes for the Kubernetes cluster setup/update to complete after adding hosts to the Kubernetes environment.

Deploying Applications in the Kubernetes Cluster

Once the cluster is ready, containerized applications can be deployed using either the Rancher application catalog or kubectl.
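
As a quick smoke test with kubectl, you could deploy a stock nginx image and expose it (names are illustrative):

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods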

For further reading, see Rancher documentation: Kubernetes ›

Other Options for Deploying Kubernetes in the Cloud

Besides the Kubernetes deployment options already covered, other tools can be used to deploy Kubernetes on public clouds like AWS. Each tool has its unique features and building blocks:

  • Heptio — Heptio provides a solution based on CloudFormation and kubeadm to deploy Kubernetes on AWS, and supports multi-AZ. Heptio is suitable for users already familiar with the CloudFormation AWS orchestration tool.
  • Kismatic Enterprise Toolkit (KET) — KET is a collection of tools with sensible defaults which are production-ready to create enterprise-tuned clusters of Kubernetes.
  • kubeadm — The kubeadm project is focused on a solution to build a simple cluster on AWS using Terraform. It is an adequate tool for tests and proofs-of-concept only, as it doesn’t support multi-AZ and other advanced features.

👋 Join us today!!

Follow us on LinkedIn, Twitter, Facebook, and Instagram
