Set up a resilient Kubernetes cluster on AWS using KOPS

Hari Manikkothu · Published in Kubernetes
Oct 22, 2019

This article shows the step-by-step process of installing a resilient Kubernetes cluster on AWS using KOPS (Kubernetes Operations).

Prerequisites

This article assumes an AWS account with administrative access, a public domain registered in Route 53, and the aws CLI, kops, kubectl, and jq utilities installed on the workstation.

Architecture

This article assumes the installation uses a public DNS domain owned by the user. The KOPS installer will create an API server alias record in Route 53 that points to the three controller nodes.

The management interface uses the api.* subdomain, which points to the controller nodes. By default, the KOPS installer creates Auto Scaling groups for both the worker nodes and the controller nodes.
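For example, once the cluster is up, the API endpoint record can be checked with dig (assuming the cluster name kopscluster.kops.mydomain.com used later in this article):

$ dig +short api.kopscluster.kops.mydomain.com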

Configure AWS

Follow the steps below to configure the aws CLI, create an IAM user with the appropriate permissions, and generate an access key ID and secret key for API access.

Initialize the aws CLI

Bootstrap the aws CLI configuration following the official documentation. Set the environment variables with the appropriate access key ID and secret key for the initial admin user, or use another secure method.
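For example, the initial admin credentials can be supplied through environment variables (placeholders shown; substitute real values):

$ export AWS_ACCESS_KEY_ID=<admin-access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<admin-secret-access-key>
$ export AWS_DEFAULT_REGION=us-east-1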

Create IAM user for KOPS installer

# create group
$ aws iam create-group --group-name kops
# attach required policies
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
$ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

# create IAM user
$ aws iam create-user --user-name kops

# add the user to kops group to inherit the policies
$ aws iam add-user-to-group --user-name kops --group-name kops
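Optionally, verify the setup by listing the group's attached policies and its members:

$ aws iam list-attached-group-policies --group-name kops
$ aws iam get-group --group-name kops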

Generate an access key ID and secret access key

$ aws iam create-access-key --user-name kops
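The command returns the new credentials as JSON, along these lines (values redacted):

{
  "AccessKey": {
    "UserName": "kops",
    "AccessKeyId": "<new-access-key-id>",
    "Status": "Active",
    "SecretAccessKey": "<new-secret-access-key>"
  }
}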

Reconfigure the aws CLI with the kops user's access key

$ aws configure
AWS Access Key ID [*********XYZ]: <new-access-key-id>
AWS Secret Access Key [***********pqrs]: <new-secret-access-key>
Default region name [us-east-1]: us-east-1
Default output format [json]: json

# Export the access key and secret for KOPS to use
$ export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
$ export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
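To confirm that the CLI is now acting as the kops user, check the caller identity:

$ aws sts get-caller-identity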

Set up DNS

The instructions here assume that a public domain is registered with AWS. KOPS can be set up to use a subdomain under this domain for the cluster. Assuming the main domain is ‘mydomain.com’, create a new hosted zone in Route 53 for the kops subdomain as follows.

# jq is a JSON parsing utility; it can be installed on a Linux system with 'sudo apt install jq'.
# create a hosted zone for the kops subdomain
$ ID=$(uuidgen) && aws route53 create-hosted-zone --name kops.mydomain.com --caller-reference $ID | \
jq .DelegationSet.NameServers

Save the output, which will look like this:

[
  "ns-759.awsdns-30.net",
  "ns-1428.awsdns-50.org",
  "ns-72.awsdns-09.com",
  "ns-1704.awsdns-21.co.uk"
]

The output from the above command is needed to create NS records for the kops subdomain.

Create a JSON file ‘kops.mydomain.json’ with the above name server values.

{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "kops.mydomain.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-759.awsdns-30.net" },
          { "Value": "ns-1428.awsdns-50.org" },
          { "Value": "ns-72.awsdns-09.com" },
          { "Value": "ns-1704.awsdns-21.co.uk" }
        ]
      }
    }
  ]
}

Get the hosted-zone-id of the main/parent domain

$ aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="mydomain.com.") | .Id'

Run the following command to set the NS records.

$ aws route53 change-resource-record-sets --hosted-zone-id <parent-zone-id> --change-batch file://kops.mydomain.json
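The command returns a ChangeInfo block with a change ID; the change can be polled until its status becomes INSYNC (the ID below is a placeholder):

$ aws route53 get-change --id <change-id>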

Test the DNS setup as follows:

$ dig ns kops.mydomain.com

The output will contain something similar to the following:
;; ANSWER SECTION:
kops.mydomain.com. 21599 IN NS ns-1428.awsdns-50.org.
kops.mydomain.com. 21599 IN NS ns-1704.awsdns-21.co.uk.
kops.mydomain.com. 21599 IN NS ns-72.awsdns-09.com.
kops.mydomain.com. 21599 IN NS ns-759.awsdns-30.net.

S3 Cluster State Storage

Create an S3 bucket for kops to use as state storage during installation and other operations.

$ aws s3api create-bucket --bucket kops-mydomain-com-state-store --region us-east-1

# Optionally, enable versioning on the state store bucket to be able to revert to previous states.
$ aws s3api put-bucket-versioning --bucket kops-mydomain-com-state-store --versioning-configuration Status=Enabled
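The kops commands that follow read the state store location from the KOPS_STATE_STORE environment variable, and the later steps refer to the cluster name as ${NAME}. Assuming the bucket and domain names used in this article, export both now:

$ export KOPS_STATE_STORE=s3://kops-mydomain-com-state-store
$ export NAME=kopscluster.kops.mydomain.com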

Set up SSH access key

First, generate an SSH key pair ‘kops_rsa’ (any name will do) using the ssh-keygen utility.
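For example (an RSA key with no passphrase; adjust the options as needed):

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/kops_rsa -N ""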

Register the public key for kops to use during cluster creation as follows. Note that this command requires the cluster configuration to already exist in the state store, so run it after the ‘kops create cluster’ step below, or alternatively pass the key directly with the --ssh-public-key flag of ‘kops create cluster’.

$ kops create secret --name kopscluster.kops.mydomain.com sshpublickey admin -i ~/.ssh/kops_rsa.pub

Create Cluster Configuration

Typically there should be at least three worker nodes and three controller nodes, distributed across three separate Availability Zones, to support a highly available architecture. Use the following command to create a cluster with a high-availability deployment. (Note that the KOPS installer supports many other options; a detailed discussion of those is out of scope for this article.)

The ‘create cluster’ command without the ‘--yes’ param only generates the cluster configuration, which can be reviewed and edited before it is applied to build the cluster on AWS.

# Generate cluster config
$ kops create cluster --node-count 3 --zones us-west-2a,us-west-2b,us-west-2c --master-zones us-west-2a,us-west-2b,us-west-2c ${NAME}
# Review/edit cluster config
$ kops edit cluster ${NAME}

Note that the cluster can be created in a region different from the default aws CLI configuration; the region is implied by the Availability Zones passed via --zones (us-west-2 here).

Build Cluster

Now the above generated config can be applied to generate the cluster on AWS as follows.

$ kops update cluster ${NAME} --yes

The output lists the AWS resources being created and ends with a note that the cluster is starting, along with suggested next steps.

Verify

Verify the cluster creation after a few minutes using the ‘kubectl get nodes’ command; all six nodes should be listed with status ‘Ready’.

Use the ‘kubectl get all’ command to view the cluster resources.
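kops also provides a validation command that checks the control plane and worker nodes against the cluster specification:

$ kops validate cluster ${NAME}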
