Kubernetes Deployments!

Deploying Kubernetes Pods Using Deployments.

Devin Moreland
All Things DevOps
8 min read · Jul 28, 2022


What is Kubernetes?

Kubernetes, often abbreviated K8s, is a container orchestration platform that was originally designed by Google to work with Docker. That is no longer the only option, and Kubernetes can now run on multiple container runtimes. Kubernetes can deploy, maintain, and scale containerized applications. It is a difficult service to learn but can be very helpful in scaling infrastructure.

Nodes

Nodes in Kubernetes are the machines where containers are deployed. A cluster can run one node or many, and each node can host one or more pods.

Pods

Pods are where containers run in Kubernetes. A pod can hold one or more containers; however, there is normally only one of each type of container per pod, e.g. one Nginx container per pod. A node can, however, run multiple pods with Nginx.

What is a Kubernetes Deployment?

Deployments tell Kubernetes to create or modify instances of pods. They are used to update the image the pods run, roll out changes, and control how many pods are running. Deployments are normally written as a YAML file, but they can be created via the CLI as well.

Purpose

The goal for today is to create an NGINX deployment on Kubernetes and use some commands to show what Kubernetes can do. We will also create a deployment YAML file and use this to scale up our deployment. To start we need Kubernetes up and running on our machine.

How to install Kubernetes on MacOS

To install Kubernetes on a Mac, all you need to do is have Docker Desktop installed: open its settings and, under the Kubernetes section, check Enable Kubernetes. Below are some basic Kubernetes commands.

$ kubectl get pods
$ kubectl get deployments
$ kubectl get services
$ kubectl describe pods
$ kubectl create -f <file name of deployment/pod/service>
$ kubectl delete -f <file name of deployment/pod/service>
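Once Docker Desktop reports that Kubernetes is running, it is worth confirming that kubectl can actually reach the cluster. A quick sanity check (assuming kubectl was installed alongside Docker Desktop and is on your PATH):

$ kubectl cluster-info   # prints the control plane address if the cluster is up
$ kubectl get nodes      # Docker Desktop runs a single node, typically named docker-desktop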

Create Nginx Deployment

To create a deployment via the CLI, run the following commands. These will create a deployment, list it, show how many pods are running in it, and then show the replica sets (we may not touch on these, but this is for my own notes).

$ kubectl create deployment my-nginx --image=nginx
$ kubectl get deployment
$ kubectl get pods
$ kubectl get replicasets

This deployment created one pod running the Nginx service. However, right now we cannot view the website because we didn't expose it on any port. We can verify this by running the following command, which gives us all the details of our deployment.

$ kubectl describe deployment <name of deployment>
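If you just want to peek at the Nginx welcome page before we cover Services later on, one quick option is a temporary port-forward from your machine into the deployment (a sketch, assuming the deployment is named my-nginx as above):

$ kubectl port-forward deployment/my-nginx 8080:80
$ curl localhost:8080   # in a second terminal; should return the Nginx welcome page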

To view the logs of our deployment, run the following command. These logs can be used to see what is happening in our deployment. We will use them more later.

$ kubectl logs deployment/<name-of-deployment>

Finally, let's delete our deployment so we can do the next part of our project. To delete a deployment, run:

$ kubectl delete deployment <name of deployment>

Using Deployment YAML Files

Now let's create the same deployment, but this time we will define it in a deployment.yaml file.

  • Create a new folder named K8s
  • Change directory into K8s
  • Create a file named deployment-nginx.yaml
  • Insert the following text into this file
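Here is a minimal sketch of what deployment-nginx.yaml could contain, based on the field descriptions below (the myapp-deployment name and the app: myapp label match the commands we run later; starting with 2 replicas is an assumption, since we will scale up to 4 shortly):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 2               # assumption: start small, we scale to 4 later
  selector:
    matchLabels:
      app: myapp            # manage any pod carrying this label
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx
          image: nginx      # official Nginx image from Docker Hub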

The sections here tell Kubernetes what we need.

apiVersion: for a Deployment this is always apps/v1.

kind: specifies what we are creating; here it is a Deployment, and later we will use a Service and a Pod.

metadata: holds the name of this Deployment (or Pod, or whatever we are creating), along with labels. Labels are like tags if you have AWS experience.

spec: is where everything happens. It defines a pod template carrying the app: myapp label and, inside that template, a container named nginx that uses the official Nginx image from Docker Hub.

selector: says that anything matching the label app: myapp is included in this deployment.

replicas: is how many pods we are creating; we start with 2 here and will bump it to 4 in a moment.

After you have saved your new file, run the following commands. The first one will create the deployment and the others will do the same as above.

$ kubectl create -f deployment-nginx.yaml
$ kubectl get deployments
$ kubectl get pods
$ kubectl logs deployment/myapp-deployment

Let's show the power of the deployment YAML file by updating the number of pods in our deployment.

  • vim into your deployment-nginx.yaml file
  • Change the number of replicas to 4
  • Save your file and then run the following. This will delete our deployment and then recreate it with 4 new pods!
$ kubectl delete deployment myapp-deployment
$ kubectl create -f deployment-nginx.yaml
$ kubectl get pods

To show how K8s keeps things running, let's delete a pod and then run another get pods; a new pod will have been provisioned to replace it.

$ kubectl delete pods <pod name>
$ kubectl get pods

There is an alternative to deleting our deployment, because we may not want our customers to lose service. Instead of deleting and recreating, we can run the following command and Kubernetes will adjust the number of pods for us.

$ kubectl scale deployment myapp-deployment --replicas=2
$ kubectl get pods

This scales us down automatically, and we now have two pods!

Let's delete everything and make this even more advanced.

$ kubectl delete deployment <deployment name>

Multi-Container Pod

For the final part of our project, we will deploy a multi-container pod using a YAML file. We will also expose our Nginx webserver on port 80 so we can access it! We can expose the port using the Kubernetes Service feature, which will allow us to reach our webserver.

To start, we need to provide some containers. Let's create a multi-container pod (not a deployment this time) by building a new YAML file.

  • Create a file named multi-depo.yaml
  • Insert the following text (see the sketch after the commands below)
  • Save this file
  • Create pod
$ kubectl create -f multi-depo.yaml
$ kubectl get pod
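Here is a minimal sketch of what multi-depo.yaml could contain, based on the description below (the pod name myapp-deployment matches the exec command used later and the app: myapp label matches the service selector we add afterwards; the mount paths are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  volumes:
    - name: html
      emptyDir: {}                             # scratch volume shared by both containers
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html     # Nginx serves index.html from here
    - name: debian
      image: debian
      volumeMounts:
        - name: html
          mountPath: /html                     # assumption: where Debian writes the file
      command: ["/bin/sh", "-c", "echo 'Hello from us through the Debian container' > /html/index.html"]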

This file will create a single pod with two containers in it: one running Nginx and one running Debian. We also created a volume named html that is shared across both containers. Our Debian container will write into this volume and our Nginx container will read from it.

volumes: in this file creates a new volume in the pod. The volumeMounts in each container let us choose which volume (html) that container mounts and the path it works out of. Then we inserted a command that writes the text “Hello from us through the Debian container” into an index.html file in the shared volume, which is where our Nginx container reads from.

Let's verify that our containers are sharing this volume and that the index.html file served by Nginx changed. We are going to do this by opening a shell into the container. Once in the shell, we will update the package lists, install curl, and use it to curl our container (the apt-get and curl commands below run inside the container).

$ kubectl exec --stdin --tty myapp-deployment -- /bin/bash
$ apt-get update
$ apt-get install curl procps
$ curl localhost
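Since this pod has two containers, kubectl may warn that it is defaulting to the first one. If you want to be explicit, you can name the container with -c (assuming the Nginx container is named nginx as in the sketch above):

$ kubectl exec --stdin --tty myapp-deployment -c nginx -- /bin/bash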

So the text in the index.html file changed! Awesome, so we know that both containers are using the same shared volume.

Service Definition

A Kubernetes Service listens on a port on the node and forwards requests to the pod running our application. This is known as a NodePort service. There are three ports involved when using this Service: the target port (the pod's port), the service port, and the node port (in the range 30000–32767). This requires a service-definition.yaml file.

Now it is time to create our service definition file. This YAML will allow us to access our ports from outside the cluster. Without it, we cannot reach our containers.

  • Create a file named service-definition.yaml
  • Insert the below code into this file (see the sketch after the commands below)
  • Save it, then run the following
$ kubectl create -f multi-depo.yaml
$ kubectl create -f service-definition.yaml
$ kubectl get services
$ kubectl get pods
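Here is a minimal sketch of what service-definition.yaml could contain, based on the description above (the app: myapp selector matches the label on our pod, and nodePort 30004 matches the port we open in the browser below; the service name itself is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service        # assumption: any valid name works here
spec:
  type: NodePort
  selector:
    app: myapp               # forward traffic to pods carrying this label
  ports:
    - port: 80               # service port
      targetPort: 80         # pod (container) port
      nodePort: 30004        # node port, must be in the 30000-32767 range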

This will create our pod along with our service. The service looks for pods with the label app: myapp, which is chosen in the selector section of the YAML, so any pod carrying that label will be picked up by this service. You can view this when you run the kubectl describe pods command. Also, the get services command will show our service; this is important because it is where we get our cluster IP and our ports. So take that IP followed by the node port to see your application, e.g. 10.98.170.123:30004.


Since we are doing this on a Mac, however, we need to type localhost:30004 into our browser in order to see our webpage!

After you type that into the browser, you should see your NGINX website!

ERRORS!

I ran across an issue with my multi-container pod: it kept showing as NotReady. So I did some googling and found this article.

You can run the following command to show the details of the pod; this is also where I found the culprit.

$ kubectl describe pods

My Debian container was not ready, but looking through the pod description I found the culprit: there wasn't actually an error. The container ran the command I gave it, wrote the new index.html file into the shared volume, and then shut down, exactly as it was supposed to.
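If you do want the pod to report Ready, one option (just a sketch building on the multi-depo.yaml sketch above) is to keep the Debian container alive after it writes the file, for example by appending a sleep to its command:

      command: ["/bin/sh", "-c", "echo 'Hello from us through the Debian container' > /html/index.html && sleep infinity"]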


Conclusion!

To wrap up, we started by explaining at a high level what Kubernetes is. Then we created a deployment running Nginx via the CLI. Next, we created another deployment with 4 Nginx pods using a YAML file. Finally, we created a multi-container pod in which two containers shared a volume, and then we learned about Services and how we can use them to open our pod up to the outside world.
