Deploy Dockerized App to Kubernetes Cluster on EC2 with kops

Kubernetes Operations, or “kops” for short, is a set of tools that helps you create, upgrade, destroy, and maintain highly available, production-grade Kubernetes clusters. Currently, AWS (Amazon Web Services) is officially supported by kops.
You can use this guide to create a Kubernetes cluster on EC2 and run Dockerized apps on it.
First of all, you have to create an AWS account. You can create one by submitting your information and a valid credit/debit card.
After that, you have to create an IAM user role.
IAM User Role

An IAM role is required to control the permissions for creating and destroying resources on AWS. Attach the following managed policies to it:
- AmazonEC2FullAccess
- IAMFullAccess
- AmazonS3FullAccess
- AmazonVPCFullAccess
- Route53FullAccess (Optional)
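If you prefer the AWS CLI over the console, the policy attachments above can be sketched as follows. This is a dry run: the group name “kops” is an illustrative choice, and the actual `aws iam attach-group-policy` call is left commented out. Note that the Route 53 policy's managed-policy name is `AmazonRoute53FullAccess`.

```shell
#!/bin/sh
# Sketch, assuming the AWS CLI is configured: attach the managed policies
# listed above to a dedicated IAM group ("kops" is an illustrative name).
# The attach call itself is commented out, so this only prints the plan.
list_policy_arns() {
  for POLICY in AmazonEC2FullAccess IAMFullAccess AmazonS3FullAccess \
                AmazonVPCFullAccess AmazonRoute53FullAccess; do
    echo "arn:aws:iam::aws:policy/${POLICY}"
  done
}
list_policy_arns | while read -r ARN; do
  echo "would attach ${ARN}"
  # aws iam attach-group-policy --group-name kops --policy-arn "${ARN}"
done
```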

Create New EC2 Instance
You have to create a new EC2 instance for provisioning and tearing down the cluster. If you are willing to use the AWS Free Tier only, you have to be careful: most AMIs and instance types are not Free Tier eligible, so choose carefully. Otherwise you will be charged.

We will use the Amazon Linux AMI for this guide.

You should select the t2.micro instance type if you want to stay within the Free Tier; all other instance types are not eligible. Remember to attach the IAM role to the instance.

The EC2 dashboard will look like this after you have created the instance.
Provisioning the Cluster
Before provisioning, you need to connect to your instance. You can use:
- SSH client (connect using PuTTY)
- EC2 Instance Connect (browser-based SSH connection)
- Java SSH client
We will use PuTTY for the connection.

Now we can create the cluster as soon as we install the necessary tools.
First we need to install kops. Use the command below to download kops onto your Amazon Linux instance:
wget https://github.com/kubernetes/kops/releases/download/1.11.0/kops-linux-amd64

You need to grant execute permissions to kops:
chmod +x kops-linux-amd64

After that, the kops binary needs to be moved into a directory on your PATH. Before doing this you have to switch to the root user.
mv kops-linux-amd64 /usr/local/bin/kops

A permission error occurs if you are not the root user; you should run
sudo su
before moving the file.
Next, you need to install kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Grant execute permissions:
chmod +x ./kubectl

Move it into a directory on your PATH:
sudo mv ./kubectl /usr/local/bin/kubectl
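The same chmod-and-move sequence was used for both kops and kubectl, so it can be wrapped in a small helper. This function is my own addition, not part of either tool; `DEST` defaults to /usr/local/bin (which needs root), but can be overridden, e.g. `DEST=$HOME/bin`, to install without sudo.

```shell
#!/bin/sh
# Helper (not part of kops/kubectl): make a downloaded binary executable
# and move it onto the PATH. DEST defaults to /usr/local/bin, which needs
# root; override it (e.g. DEST=$HOME/bin) to install without sudo.
install_bin() {
  SRC="$1"; NAME="$2"; DEST="${DEST:-/usr/local/bin}"
  chmod +x "$SRC"
  mv "$SRC" "$DEST/$NAME"
  echo "installed $DEST/$NAME"
}
# Usage after the downloads above:
# install_bin kops-linux-amd64 kops
# install_bin ./kubectl kubectl
```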
Now the installations are done. Next you have to create an S3 bucket.

All of your cluster configuration is stored in this S3 bucket. After creating the bucket, we set it as an environment variable so that kops can automatically detect it as the state store.
export KOPS_STATE_STORE=s3://testaws
You have to set the region as an environment variable as well. We will use the ap-southeast-1 region for this cluster.
export REGION=ap-southeast-1
Then you have to set a name for the cluster. Be careful when choosing the name:
because we are not using Route 53, we are creating a gossip-based cluster, so the cluster name must end with the suffix .k8s.local. That is the only way to create a gossip-based cluster. A gossip-based cluster uses internally hosted DNS, which is why we don’t have to buy a Route 53 domain.
export KOPS_CLUSTER_NAME=testaws.k8s.local
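Since a wrong suffix only surfaces later as a confusing DNS error, a tiny pre-flight check can catch it early. This function is my own addition, not a kops command; it just enforces the .k8s.local rule described above.

```shell
#!/bin/sh
# Pre-flight sanity check (my own addition, not part of kops): a gossip-
# based cluster only works when the cluster name ends in .k8s.local.
check_gossip_name() {
  case "$1" in
    *.k8s.local) echo "ok: $1 is a valid gossip cluster name" ;;
    *)           echo "error: $1 must end in .k8s.local"; return 1 ;;
  esac
}
# Check the name we exported above (falls back to the example name).
check_gossip_name "${KOPS_CLUSTER_NAME:-testaws.k8s.local}"
```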
Before creating the cluster we have to specify an SSH public key; without one we will get an error.

The error occurs because we haven’t generated an SSH key pair yet, so generate one:
ssh-keygen

Create Cluster
Finally, we can now create the Kubernetes cluster with kops.
kops create cluster --master-size t2.micro --master-volume-size 15 --node-size t2.micro --node-volume-size 10 --zones=ap-southeast-1a --yes

In this command,
- --master-size -: Used to specify the master node instance type.
- --master-volume-size -: Used to specify the root volume size (in GB) of the master node.
- --node-size -: Used to specify the worker node instance type.
- --node-volume-size -: Used to specify the worker node volume size (in GB).
- --zones -: Used to specify the availability zone(s) in which to create the cluster.
- --yes -: Applies the changes immediately instead of only previewing them.
Other than these, you could use the following options too.
--vpc
Use a custom VPC or share an existing VPC.
--master-zones
Specify the zones in which to run the masters.
--master-count
Specify the number of masters.
--node-count
Set the total number of worker nodes.
After creating the cluster you have to update it. This applies the changes to the cloud.
kops update cluster --name ${KOPS_CLUSTER_NAME} --yes

Now our cluster should hopefully be ready; to make sure, we should validate it.
kops validate cluster

We may see validation errors while the cluster is still being updated in the cloud.
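Rather than rerunning the validation by hand, you can poll until it succeeds, since `kops validate cluster` exits non-zero while nodes are still joining. The helper below is my own sketch; the 20-attempt cap and 30-second delay are illustrative choices, not kops defaults.

```shell
#!/bin/sh
# Sketch: poll a command until it succeeds or a retry budget runs out.
# Against the real cluster: wait_for 20 30 kops validate cluster
wait_for() {
  attempts="$1"; delay="$2"; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "ready after $i attempt(s)"
      return 0
    fi
    echo "attempt $i failed, retrying in ${delay}s"
    sleep "$delay"
    i=$((i + 1))
  done
  echo "gave up after $attempts attempts"
  return 1
}
# wait_for 20 30 kops validate cluster
```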

Now we can see the cluster is finally ready; we can deploy our apps to it.
Deploy an Application
I recently uploaded a Docker image to Docker Hub; we can use that image to test the cluster we created.
kubectl run awstest --image=docker.io/dgamidu/awstest:0.0.1-SNAPSHOT --port=8080

This command pulls the image from the repository and creates a container on the cluster. The application runs on port 8080; if your application doesn’t need a port, you can simply omit the flag.
If you want to see the running containers, list the pods:
kubectl get pods

We can see the STATUS showing Running; it may show ContainerCreating if you run this command immediately after deploying.
Now we have to expose the deployment through a Load Balancer.
kubectl expose deployment awstest --type=LoadBalancer --port=8080 --target-port=8080

Now the app is exposed; we have to find the external address of our service.
kubectl get svc

The “awstest” application has been exposed on an external address and port through a load balancer.
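On AWS, the EXTERNAL-IP column for a LoadBalancer service is actually an ELB hostname, which can be pulled out with a jsonpath query and used to call the app. The hostname below is a placeholder, not a real endpoint, and the kubectl/curl calls are commented out since they need the live cluster.

```shell
#!/bin/sh
# Sketch: extract the ELB hostname of the "awstest" service and build the
# URL to call. Against the real cluster, uncomment the kubectl/curl lines.
# EXTERNAL=$(kubectl get svc awstest \
#   -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
EXTERNAL="example-elb.ap-southeast-1.elb.amazonaws.com"  # placeholder value
echo "http://${EXTERNAL}:8080/"
# curl "http://${EXTERNAL}:8080/"
```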
And now we have an application running on a Kubernetes cluster in AWS. This guide covered building a basic gossip-based Kubernetes cluster and running a Dockerized app on it. If you want to deploy a production-level application, you should use a deployment.yaml to create replica sets, manage existing Deployments, and adopt resources.
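The deployment.yaml approach mentioned above could look like the sketch below. The image name matches the one used earlier; the replica count of 2 is an illustrative choice, and the `kubectl apply` call is left commented out since it needs the live cluster.

```shell
#!/bin/sh
# Sketch: write a minimal deployment.yaml for the awstest app. Replicas=2
# is an illustrative choice; apply it with the commented kubectl line.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awstest
spec:
  replicas: 2
  selector:
    matchLabels:
      app: awstest
  template:
    metadata:
      labels:
        app: awstest
    spec:
      containers:
      - name: awstest
        image: docker.io/dgamidu/awstest:0.0.1-SNAPSHOT
        ports:
        - containerPort: 8080
EOF
echo "wrote deployment.yaml"
# kubectl apply -f deployment.yaml
```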
