Taking the Kubernetes Helm for the First Time
A Beginner’s Guide to Creating Multiple K8s Deployments
If this is your first time joining me on my transition to tech, I am so glad you are here! My name is Melissa, but I prefer to go by Mel. I am currently transitioning into the world of Cloud DevOps with the help of the Level Up in Tech Program. If you have been a follower, I want to say thank you for your support and time! I hope you have enjoyed the projects I have brought to the table.
We are going to continue building our skills with containers by taking the helm and working with Kubernetes! Don’t worry, I will try to break it down so we have smooth sailing.
A little background //
Kubernetes is an open-source orchestration system used for automating deployment, scaling, and management of containerized applications.
Fun Facts:
- Kubernetes is often referred to as K8s, as there are 8 letters between the “K” and the “s”.
- Kubernetes’ creators, Beda and McLuckie, inspired by “Star Trek,” pitched the idea to Google that containers were the future, resulting in its public release in 2014. Today K8s is a Cloud Native Computing Foundation (CNCF) project.
- Kubernetes’ logo is a ship’s wheel, often referred to as the helm; Kubernetes stems from the Greek word meaning “helmsman.” The wheel has seven spokes, in homage to the original name, “Project 7,” which again referred back to the creators’ inspiration from “Star Trek.”
Scenario //
A company needs to host multiple websites or web applications using a single IP address and port. Kubernetes can be a cost-effective solution for smaller businesses that cannot afford separate servers for each website or application.
Objective //
- Create two Deployments
- Deployment 1: a minimum of 2 Pods running the nginx image, with a custom index.html page displaying “Deployment One”
- Deployment 2: a minimum of 2 Pods running the nginx image, with a custom index.html page displaying “Deployment Two”
- Create a Service pointing both Deployments to the same IP address and port number
- Validate using the curl command
To follow along with this project you will need //
- Access to AWS (NOTE: this is not a Free-Tier project)
- Visual Studio Code installed on your OS
- Attention to detail
Ready? Clear skies ahead and smooth waters! Let’s begin with setting our foundation by creating our EC2.
Creating our EC2 //
Navigate to your AWS Console, then to your EC2 Dashboard.
- Select Launch instance
- Name your EC2
- Select Ubuntu 22.04 LTS
This time we are moving away from the Free tier, as K8s needs a minimum of 2 CPUs and 2 GB of RAM to operate.
- Turn on All generations
- Use Search Bar and Type t3a
- Select t3a.medium
- Create a Key pair
- Select Edit by Network settings
The Network settings section has opened up; we can now add custom inbound rules.
- SSH traffic from your IP address
- All traffic from Anywhere
- Add ports 2377, 7946, 2376, and 4789 under Custom TCP, sourced from your VPC CIDR block
- Select Launch instance
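If you prefer the command line, the same inbound rules can be added with the AWS CLI. This is only a sketch: the security group ID and CIDR below are placeholders, so substitute your instance’s security group and your VPC CIDR block.

```shell
# Placeholders -- replace with your own security group ID and VPC CIDR
SG_ID=sg-0123456789abcdef0
VPC_CIDR=172.31.0.0/16

# Open each custom TCP port to traffic from inside the VPC
for port in 2376 2377 4789 7946; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port "$port" --cidr "$VPC_CIDR"
done
```

Either route ends in the same place; the console is easier to follow along with, the CLI is easier to repeat.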
Steering ourselves towards our next step, we will be using Visual Studio Code. If you haven’t set up VS Code you can review this article for assistance.
In Visual Studio Code connect to EC2//
In VS Code, navigate to the lower left-hand corner and select the double-arrow icon. Once you click it, a new remote window will open.
- Select Connect to Host…
- Select Configure SSH Hosts…
- Select the first path: C:\Users\yourusername\.ssh\config
This process should feel familiar if you followed along with our previous project. We will now update our .ssh config file with our EC2.
- Update Config → File → Save and close
- Navigate back to lower left hand corner and select double arrow icon
- Select Connect to Host
- Select EC2 Name
- Select Linux as we are using Ubuntu as our OS
- Select Continue
You should now be connected to your EC2.
Setting up our EC2 for Success //
In this section, we will be working on installing Kubernetes onto our EC2, as well as running a few commands to aid in productivity.
- Close Welcome message
- Open Terminal by selecting … on top menu
- Install Kubernetes by running:
sudo snap install microk8s --classic
The following command adds our user, ubuntu, to the microk8s group so it can run MicroK8s commands without sudo. I really wanted to try this after reading about it in the documentation attached at the end of the article, and I find it helps me be a little more efficient. Feel free to skip it; just note you will need to add sudo in front of your commands.
sudo usermod -a -G microk8s ubuntu
Since we installed MicroK8s, the commands are a little different: you have to run ‘microk8s kubectl’ instead of just ‘kubectl’. To aid in efficiency, we can create an alias that shortens the command to kubectl.
echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
- Reboot system and log back into EC2
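If you would rather not wait on a full reboot, the new group membership and the alias can usually be picked up in the current session. A sketch, assuming you are still in the same SSH session with a bash shell:

```shell
newgrp microk8s     # opens a subshell with the new group membership applied
source ~/.bashrc    # inside that subshell, load the kubectl alias
kubectl get nodes   # should report your single node once MicroK8s is up
```

A reboot is the surer path; this shortcut just avoids the round trip.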
Our next step in setting up our EC2 will be to create a “kubectl config file.” If you have ventured away from your HOME directory, you can cd back into it. We will make a directory called .kube and place our config file there.
cd $HOME
mkdir .kube
microk8s kubectl config view --raw > $HOME/.kube/config
- Install CoreDNS in Kubernetes
sudo microk8s enable dns
- Verify the install
kubectl version --short
Creating Directory for Manifests //
I am a fan of organization and anything that keeps me productive. I find it best practice to create a directory for the project you are currently working on.
- Create directory
mkdir wk18Deploy
- Change locations into new directory
cd wk18Deploy
Spinning up Deployment One //
Our first step will be to create a YAML “manifest,” a simple text file listing our resources and environment variables.
- Select File From the Top VS Code menu
- Select New Text File from drop down menu
You will see that you can select a language, pick a template, or simply start writing to dismiss the message.
- Select YAML
- File → Save As deploymentone.yml
(Make sure to choose the directory we created from the drop down menu)
For the resource sections below, you can check out the Kubernetes Documentation for the API terminology. Here is a brief breakdown of what we will be scripting in our deploymentone.yml:
- apiVersion: I chose to work with apps/v1
- Every resource type has a concrete representation (its object schema) called a kind; we have set ours to Deployment, with nginx as the name, as this deployment serves an nginx image
- Creating 2 Pods
- Port 80, by default is the nginx port that the web server uses to listen to all incoming traffic and connections.
- Added volume to be able to display a custom web page
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-index-html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-index-html
        configMap:
          name: configmap-one
- Deploy
kubectl apply -f deploymentone.yml
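A quick way to confirm the rollout (kubectl here is the microk8s alias we set up earlier):

```shell
kubectl rollout status deployment/nginx   # blocks until the replicas report ready
kubectl get deployment nginx              # READY should show 2/2 once the ConfigMap exists
```

Note that the Pods may sit in ContainerCreating until the ConfigMap from the next step is applied, since the volume in the manifest references it.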
- Create a ConfigMap for Deployment 1
ConfigMaps keep configuration data separate from the container image, allowing our application to be easily portable.
Repeating the same steps we used to create our first YAML script, navigate to the menu bar and open a new text file for our ConfigMap.
- File → New Text File → Select YAML → File →Save As configmap1.yml
Script Breakdown:
- apiVersion: continuing to work with v1
- kind is now set to ConfigMap, with the name matching our deployment script above: configmap-one
- data: holds the custom index.html to display
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-one
data:
  index.html: |
    <html>
    <h1>Deployment One</h1>
    <br/>
    <h1>Wk 18 Mel Foster Deployment One</h1>
    </html>
- Create the ConfigMap
kubectl apply -f configmap1.yml
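You can double-check that the ConfigMap landed, and that it carries the index.html data, with:

```shell
kubectl get configmaps                    # configmap-one should appear in the list
kubectl describe configmap configmap-one  # shows the index.html contents under Data
```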
Awesome!! One deployment down, one more to go!
Spinning up Deployment Two //
Our goal is to create a second deployment with another 2 pods running the nginx image. We will follow the same steps, creating a third YAML file and updating the code to make it unique to Deployment Two.
- File → New Text File → Select YAML → File →Save As deployment2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-index-html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-index-html
        configMap:
          name: configmapd2
- Deploy
kubectl apply -f deployment2.yml
- Create a ConfigMap for Deployment 2
- File → New Text File → Select YAML → File → Save As configmap2.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmapd2
data:
  index.html: |
    <html>
    <h1>Deployment Two</h1>
    <br/>
    <h1>Wk 18 Mel Foster Deployment Two</h1>
    </html>
- Deploy
kubectl apply -f configmap2.yml
Great job so far! Keep hanging in there, we are almost done. To recap, we should have created four YML or YAML files.
Let’s verify our pods are running before moving on to the last portion. We should have four pods total, two for each deployment.
kubectl get pods
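A couple of variations worth trying here. Both deployments label their pods app: nginx, which is exactly why one Service will be able to front all of them in the next step:

```shell
kubectl get pods -o wide               # all four pods, plus node and pod IP columns
kubectl get pods --selector app=nginx  # the shared label selects pods from both deployments
```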
Creating Our Service //
It’s time to create a service that will point to both deployments, using the same IP address and port number. We will create another YAML “Manifest.”
- File → New Text File → Select YAML → File →Save As servicenginx.yml
Script Breakdown:
- apiVersion: continuing to work with v1
- kind is now set to Service, with the name servicenginx
- the selector app: nginx matches the label on the Pods in both deployments, so one Service fronts all of them
- port: 80
apiVersion: v1
kind: Service
metadata:
  name: servicenginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
- Deploy
kubectl apply -f servicenginx.yml
- Verify list of services
kubectl get services
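If you want just the two values we need for validation, kubectl’s jsonpath output can pull the cluster IP and the auto-assigned NodePort directly (servicenginx is the name from our manifest above):

```shell
kubectl get service servicenginx -o jsonpath='{.spec.clusterIP}{"\n"}'        # the CLUSTER-IP
kubectl get service servicenginx -o jsonpath='{.spec.ports[0].nodePort}{"\n"}' # the NodePort
```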
Validating //
Wrapping up, we can now validate our project. The easiest way to see the index.html page is to run the curl command. We want to validate servicenginx, so we will curl the IP address listed under CLUSTER-IP.
curl <ip_address>
Note: when you curl, you will receive the front end served by one of the pods.
Example: we currently have four pods running across the two deployments. The service spreads traffic over them, so if you keep curling you will see the page shift between deployments over time.
As you can see from the screenshot above, it’s not necessarily an even back-and-forth, but it does alternate.
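To watch the alternation without retyping the command, you can curl in a short loop; substitute your own CLUSTER-IP for the placeholder:

```shell
# Replace <ip_address> with the CLUSTER-IP from `kubectl get services`
for i in 1 2 3 4 5 6; do
  curl -s <ip_address> | grep '<h1>Deployment'
done
```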
Sweet! Let’s validate again, this time using our EC2 Public IPv4 address and the port from the NodePort service.
<EC2 Public IPv4 address>:NodePort
Congratulations on successfully taking the helm! It’s never easy weathering a new topic, but you did fantastic! Thank you for joining me on this project. Be on the lookout for upcoming projects surrounding Terraform.
Tips //
- Stop/Terminate any EC2 you no longer need
Helpful Resources //
Join me on https://www.linkedin.com/in/melissafoster08/ or follow me at https://github.com/mel-foster