Kubernetes on CentOS 7 with HA in AWS or on-premises. Deploying an app with CircleCI.
--
History:
Kubernetes is an open-source container orchestration platform; Google uses this technology for its own applications.
Google donated the project to the Cloud Native Computing Foundation (https://www.cncf.io/), and it receives contributions from multiple cloud vendors such as AWS, Azure, Google Cloud Platform, Bluemix, Red Hat and Alibaba Cloud.
In recent years Kubernetes has become very popular among developers.
This impact puts Kubernetes in a special position: it is now a collaborative platform for cloud vendors, and its adoption keeps growing rapidly.
Many things have changed since my first installation (https://www.itshellws.org/kubernetes/). For example, networking is now handled through CNI plugins, and the new 'kubeadm' tool makes installation easier.
Nowadays there is also a tool called kops. It is very good and is used to create Kubernetes clusters automatically in AWS, but if you want more control over the cluster you may prefer a manual installation.
For more info on kops, see https://github.com/kubernetes/kops and https://www.nivenly.com (Kris Nova writes on that blog).
Details:
- Three Kubernetes masters in HA (latest Kubernetes version)
- Docker version 17.03
- OS: CentOS 7 on AWS
- Example of CI with CircleCI
Prerequisites
- At least six EC2 instances: three masters, two minions and one nginx load balancer.
etcd and the API server are installed on the master nodes. In this example I use two t2.micro instances and one t2.medium.
- A VPC with 3 multi-AZ subnets for HA.
I use the default VPC that Amazon creates in the region.
Details :
VPC — IPv4 CIDR block 172.31.0.0/16
SUBNETS
172.31.0.0/20 | Availability Zone — 2a
172.31.16.0/20 | Availability Zone — 2b
172.31.32.0/20 | Availability Zone — 2c
NODES
masters:
172.31.27.31 kub01 | Availability Zone — 2b
172.31.41.254 kub02 | Availability Zone — 2c
172.31.12.35 kub03 | Availability Zone — 2a
load-balancer:
172.31.12.10 kublb01 | Availability Zone — 2a
minions:
172.31.12.20 minion1 | Availability Zone — 2a
172.31.12.21 minion2 | Availability Zone — 2a
- About 30 minutes of your life to learn something new 😄
Architecture:
Starting to Work:
Open your AWS account and select the region to work in. I use the Ohio region (this region is empty in my account).
VPC — default.
SUBNETS
172.31.0.0/20 | Availability Zone — 2a
172.31.16.0/20 | Availability Zone — 2b
172.31.32.0/20 | Availability Zone — 2c
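If you want to double-check the default VPC and its subnets from the command line, a quick look with the AWS CLI (assuming it is installed and configured for the same region) would be:
# Show the default VPC of the current region
aws ec2 describe-vpcs --filters Name=isDefault,Values=true --query 'Vpcs[].[VpcId,CidrBlock]' --output table
# Show its default subnets and their availability zones
aws ec2 describe-subnets --filters Name=default-for-az,Values=true --query 'Subnets[].[SubnetId,CidrBlock,AvailabilityZone]' --output table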
Launch Instance: CentOS 7
Select CentOS 7
Free tier — 😆
I select t2.micro.
Start the installation with kub03: 172.31.12.35 | Availability Zone 2a
Note: don't start by installing kub01 or kub02.
Copy this script into Advanced Details (user data):
https://raw.githubusercontent.com/nightmareze1/kubernetes-1.8-ha/master/kub03.sh
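If you want to review the script before pasting it into the user data field, you can fetch it first:
# Download the kub03 bootstrap script and read it before using it as user data
curl -sL https://raw.githubusercontent.com/nightmareze1/kubernetes-1.8-ha/master/kub03.sh -o kub03.sh
less kub03.sh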
Tags: Name — kub03
Port requirements | Create a new security group.
If you already created the security group, launch your instance.
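As a reference, these are the ports a kubeadm-style cluster normally needs between nodes; a hedged sketch with the AWS CLI, where sg-xxxxxxxx is a placeholder for your security group ID:
# Kubernetes API server (masters; this is also what the nginx LB forwards to)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 6443 --cidr 172.31.0.0/16
# etcd client and peer traffic (masters)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 2379-2380 --cidr 172.31.0.0/16
# kubelet API (all nodes)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 10250 --cidr 172.31.0.0/16
# NodePort range for services (minions)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 30000-32767 --cidr 172.31.0.0/16
Don't forget SSH (22) from your own IP so you can connect to the instances.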
Proceed to launch the second instance — kub02
The second node is kub02, IP 172.31.41.254 | Availability Zone 2c
Copy this script into Advanced Details (user data):
https://raw.githubusercontent.com/nightmareze1/kubernetes-1.8-ha/master/kub02.sh
Then configure it in a similar way to kub03 (storage, security groups),
change the Name tag (kub02) and launch.
Proceed to launch kub01. This node is the last master. The script needs instances kub03 and kub02 to be running, with the status check at 2/2 checks passed (ticked in green).
Launch the instance and select CentOS 7 and its instance type.
The node is kub01, IP 172.31.27.31 | Availability Zone 2b
Copy this script into Advanced Details (user data) and edit the lines for your domain FQDN:
https://raw.githubusercontent.com/nightmareze1/kubernetes-1.8-ha/master/kub01.sh
Then configure it in a similar way to the other servers (storage, security groups),
change the Name tag (kub01) and launch.
Edit lines:
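As an illustration only (the real variable names are inside kub01.sh, so check the script itself), the FQDN-related lines are the ones that tell the installer which name the API server should answer to, something like:
# Hypothetical example: use your own domain here
API_FQDN="master.itshellws-k8s.com"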
In about 2 minutes your cluster (etcd and Kubernetes) will be working.
Wait until the status check shows 2/2 checks passed and connect to the instances over SSH.
Connect to the servers and check the etcd, kubelet and other services.
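A quick way to verify them on each master (assuming etcd answers on its default client port 2379 without client certificates; adjust the endpoint if your scripts enable TLS):
# Core services should be active and running
sudo systemctl status etcd kubelet docker
# etcd should report all three members as healthy
etcdctl --endpoints=http://127.0.0.1:2379 cluster-health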
The masters are running with HA for the etcd service. Very good 😃
Proceed to launch the LB, kublb01: 172.31.12.10 | Availability Zone 2a
Copy this script into Advanced Details (user data):
https://raw.githubusercontent.com/nightmareze1/kubernetes-1.8-ha/master/lb-nginx
Then configure it in a similar way to the other servers (storage, security groups),
change the Name tag (kublb01) and launch.
Connect to the LB and check that the nginx service is OK.
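Something like this confirms nginx is up and proxying to the masters (assuming the load-balancer script exposes the API on port 6443; adjust the port to whatever its nginx config listens on):
# nginx should be active and running
sudo systemctl status nginx
# The API server should answer through the LB (self-signed certificate, hence -k)
curl -k https://localhost:6443/healthz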
In the last step I'm going to install one minion, but first I have to generate a new bootstrap token and the CA certificate hash.
# Get the SHA-256 hash of the cluster CA certificate (used for --discovery-token-ca-cert-hash)
sudo openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der | openssl dgst -sha256 -hex
# Generate a new bootstrap token for joining nodes
sudo kubeadm token create --groups system:bootstrappers:kubeadm:default-node-token
# List the existing tokens to confirm
sudo kubeadm token list
Save the generated data and proceed to launch the minion.
Copy this script into Advanced Details (user data) and edit the kubeadm join data.
Replace the token and hash with the ones you generated.
https://raw.githubusercontent.com/nightmareze1/kubernetes-1.8-ha/master/minion2.sh
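For reference, the join data you replace ends up in a kubeadm join call roughly like this (all values are placeholders; the address and port must match your load balancer):
# Placeholder values: use your own token, hash and LB FQDN
kubeadm join master.example.com:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>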
Then configure it in a similar way to the other servers (storage, security groups),
change the Name tag (minion2) and launch.
Wait until the status check is ticked in green, connect to minion2
and run the ./join script.
The minion connects to the Kubernetes API through the load balancer.
Check the node status with kubectl.
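From a master (or any machine with the kubeconfig), confirm that the minion registered:
# The new minion should appear with STATUS Ready after a short while
kubectl get nodes -o wide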
I recommend configuring more DNS replicas or horizontal autoscaling.
Command to scale the DNS replicas:
kubectl scale --replicas=3 deployments/kube-dns -n kube-system
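If you prefer autoscaling over a fixed replica count, one option is a Horizontal Pod Autoscaler (hedged sketch; it needs CPU metrics available in the cluster):
# Scale kube-dns between 2 and 5 replicas based on CPU usage
kubectl autoscale deployment kube-dns -n kube-system --min=2 --max=5 --cpu-percent=80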
The next step is to deploy an application to the cluster.
Install git on one master and clone my repo at tag version 0.0.30.
yum install git -y
git clone https://github.com/nightmareze1/alpine.git && cd alpine && git checkout tags/v0.0.30 -b v0.0.30
This repo contains the config for CI with CircleCI.
I assume you have a CircleCI account; if not, create one.
CircleCI: I created a free account, which includes 1,500 minutes for deployments.
The next step is to configure the project: click on Projects and select your app repo.
In the project, configure the secret variables.
The data for the Certificate_* and client_* variables is in the ~/.kube/config file on kub01 (or kub02/kub03).
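The base64-encoded values can be pulled straight out of the kubeconfig; the field names below are the ones a kubeadm-generated ~/.kube/config uses:
# Run on kub01 (or kub02/kub03); each command prints one base64 blob
grep 'certificate-authority-data' ~/.kube/config | awk '{print $2}'
grep 'client-certificate-data' ~/.kube/config | awk '{print $2}'
grep 'client-key-data' ~/.kube/config | awk '{print $2}'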
LB-FQDN: create an FQDN in your domain and point it to the load balancer's public IP.
kube_api is the LB FQDN, for example kube_api: master.itshellws-k8s.com
I use Docker Hub and ECR as registries; that's why my credentials are aws | docker.
REPO: nightmareze1/alpine
Now we are ready to launch the app: edit a file in the repo and commit the changes.
CircleCI triggers the deployment pipeline and builds the container.
Check the new service named alpine and its port, then test the app at 'IP:port' on a minion or the LB.
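Assuming the service is exposed as a NodePort (which is what testing 'IP:port' on a minion suggests), you can find the port and test it like this:
# Show the alpine service and the port it was assigned
kubectl get svc alpine
# Test against a minion IP and the NodePort (placeholder port)
curl http://172.31.12.20:<node-port>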
My blog is an app running on Kubernetes.
Thanks for reading this article. I hope you find it helpful 😃