Kubernetes Kops cluster on AWS

Deepak kumar Gunjetti
andcloud.io
Oct 9, 2019

Kops

Kops provides production-grade Kubernetes installation, upgrades, and management. It is especially handy on AWS, where you may choose kops instead of EKS to create a Kubernetes cluster.

Below are the steps to create a test cluster using kops.

Install the kops binary

brew update && brew install kops

Install the AWS CLI

brew update && brew install awscli

Setup IAM User

Create a kops user with the following permissions:

AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess

Create the kops group and user using the AWS CLI:

aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops

Record the SecretAccessKey and AccessKeyID in the returned JSON output.
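For reference, the JSON returned by `aws iam create-access-key` has the shape sketched below. This is illustrative only — the values are placeholders, not real credentials — and shows one way to pull out the two fields you need to record:

```shell
# Illustrative only: the documented shape of the create-access-key response,
# written to a temp file with placeholder values.
cat > /tmp/access-key.json <<'EOF'
{
  "AccessKey": {
    "UserName": "kops",
    "AccessKeyId": "AKIAEXAMPLEKEYID",
    "Status": "Active",
    "SecretAccessKey": "examplesecretkey",
    "CreateDate": "2019-10-09T00:00:00Z"
  }
}
EOF

# Extract the two values to record:
grep -o '"AccessKeyId": "[^"]*"' /tmp/access-key.json
grep -o '"SecretAccessKey": "[^"]*"' /tmp/access-key.json
```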

Configure the AWS client to use your new IAM user:

aws configure

Export the AccessKeyID and SecretAccessKey:

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
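To confirm the credentials are picked up correctly, you can ask AWS which identity the CLI is now authenticating as (this makes a live API call, so it needs network access and the keys exported above):

```shell
# Sanity check: prints the account ID, user ID, and ARN of the
# caller — it should report the kops user created earlier.
aws sts get-caller-identity
```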

Create an S3 bucket for cluster state storage

aws s3api create-bucket \
--bucket myfirstcluster-state-store \
--region us-east-1
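The kops documentation recommends enabling versioning on the state-store bucket, so earlier versions of the cluster state can be recovered if something goes wrong:

```shell
# Enable versioning on the state store (bucket name from the step above).
aws s3api put-bucket-versioning \
  --bucket myfirstcluster-state-store \
  --versioning-configuration Status=Enabled
```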

Create a gossip-based cluster

Set up environment variables for the cluster name and the S3 bucket used to store the cluster state. For a gossip-based cluster, the name must end with .k8s.local:

export NAME=myfirstcluster.k8s.local
export KOPS_STATE_STORE=s3://myfirstcluster-state-store

Create cluster configuration

kops create cluster \
--node-count 3 \
--master-size t2.medium \
--authorization RBAC \
--networking canal \
--zones us-east-1a \
--ssh-public-key ~/.ssh/id_rsa.pub \
--cloud=aws \
--topology=private \
--name ${NAME}

This writes the cluster configuration to the S3 state store; no AWS resources are created until the cluster is built.

Create a new SSH public key secret called admin:

kops create secret sshpublickey admin -i ~/.ssh/id_rsa.pub --name ${NAME}

Build the Cluster

kops update cluster ${NAME} --yes

Wait for the cluster to come up, then validate it:

kops validate cluster

Check the nodes and cluster components:

kubectl get nodes
kubectl -n kube-system get pods

We now have a simple kops cluster to test.
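As a quick smoke test (assuming kubectl is pointing at the new cluster), you can run a throwaway deployment and then clean it up:

```shell
# Deploy nginx, check that its pod gets scheduled, then remove it.
kubectl create deployment nginx --image=nginx
kubectl get pods -l app=nginx
kubectl delete deployment nginx
```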

Add bastion node

To access the nodes within a cluster built with the private topology, we use a bastion node:

kops create instancegroup bastions --role Bastion --subnet utility-us-east-1a --name ${NAME}

This opens the instance group spec in your editor; save the file to apply it.

Update the cluster

kops update cluster ${NAME} --yes
kops validate cluster

Get the ELB address created for the bastion node:

bastion_elb_url=$(aws elb describe-load-balancers --output=table | grep 'DNSName.*bastion' | awk '{print $4}')
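Since the pipeline above is easy to get wrong, here is the same extraction run against an illustrative line of `--output=table` text (the DNS name is a placeholder, not a real ELB):

```shell
# A sample line as it appears in describe-load-balancers table output.
sample='||  DNSName                                  |  bastion-myfirstcluster-abc123.us-east-1.elb.amazonaws.com   ||'

# Same grep/awk extraction as above: field 4 is the DNS name.
bastion_elb_url=$(echo "$sample" | grep 'DNSName.*bastion' | awk '{print $4}')
echo "$bastion_elb_url"
```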

Use ssh-agent to access the bastion:

eval `ssh-agent -s`

Add the AWS key pair .pem file:

ssh-add -K <keypair>.pem

SSH to the bastion:

ssh -A admin@${bastion_elb_url}

From the bastion, SSH to a master or worker node (the -A flag above forwards your agent, so your key is available on the bastion):

ssh <master_ip>
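The node IPs to use here are the INTERNAL-IP values reported by kubectl, which you can list from your workstation:

```shell
# The INTERNAL-IP column shows each node's private address,
# reachable from the bastion.
kubectl get nodes -o wide
```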

To delete the cluster:

kops delete cluster --name ${NAME} --yes
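If the state-store bucket is no longer needed either, it can be removed once the cluster is gone (note that --force deletes every object in the bucket first):

```shell
# Remove the state-store bucket created earlier.
aws s3 rb s3://myfirstcluster-state-store --force
```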
