
Kubernetes: Deployments and Multi-Container Pods

Meet your new helmsman…

Kubernetes is an open-source orchestration tool for deploying containers at scale. It pairs naturally with microservices and works across multiple cloud providers, which has made it one of the fastest-moving projects in open-source history. Kubernetes eliminates many of the manual processes involved in deploying and scaling containerized applications. Basically, Kubernetes helps you efficiently manage clusters of hosts running web servers, databases, Linux distributions, and more. When it comes to deployment speed, ease of delivery, workload portability, and fitting into the DevOps work cycle, Kubernetes is king.

I am running this project on Windows 10, with Docker Desktop v1.10.1, and an Ubuntu 20.04.4 LTS (Focal Fossa) subsystem. I will be working inside my Ubuntu CLI using Vim for edits, but I do have VS Code on the side as well.

1. Create a deployment that runs the Nginx image from the CLI
2. Display the details of this deployment
3. Check the event logs from the deployment
4. Delete the deployment

Let's first ensure that kubectl is configured to talk to your cluster by running the kubectl version --output=yaml command. This will display various information about the client and server versions you are working with…

Next, let's view our nodes inside the cluster with the kubectl get nodes command. Kubernetes will choose where to deploy our application based on each node's available resources. We can also make sure the nodes are in a Ready status…
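On Docker Desktop there is a single node named docker-desktop, so the output should look roughly like this (the ROLES, AGE, and VERSION values will differ depending on your setup):

    NAME             STATUS   ROLES           AGE   VERSION
    docker-desktop   Ready    control-plane   10d   v1.24.0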

In my case, I only have the single node, but it is ready to go

Now we will create our application with the kubectl create deployment command. You must provide a deployment name and an application image. I will pull the official Nginx image from Docker Hub (https://hub.docker.com/_/nginx) using a supported tag. If you click on the provided link, it will take you to the official Nginx image page, showing all the build information and supported versions…

(TIP: When pulling from Docker Hub, there is no need to specify a registry URL. If pulling an image from a registry other than Docker Hub, you would need to include the full registry URL in the image name.)

I will be using the stable v1.22.0 version of Nginx. In production, it is a good idea to ALWAYS specify a version to avoid compatibility troubleshooting down the line, so it's best to just get used to it. Run the following command to create the deployment…
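For reference, the create command looks like this, using the deployment name nginx-deployment that we will see throughout this post and the 1.22.0 tag mentioned above:

    kubectl create deployment nginx-deployment --image=nginx:1.22.0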

With that, you just deployed an application by creating a deployment. Running this command, Kubernetes…

  • Searched for a suitable node where an instance of the application could be run (my one available node)
  • Scheduled the application to run in that node
  • Configured the cluster to reschedule the instance on a new node if required

Let's run kubectl get deployments to verify further…

Here you can see my deployment named nginx-deployment running a single instance inside a Docker container on my node. We can even have a quick look at Docker Desktop for a view from the GUI…

Now let's display the details of the deployment with kubectl describe deployments nginx-deployment…

Here, you see the deployment, pod, and container configuration information, such as labels, creation timestamp, ports, image, pod state, replica sets, and so on…

Next, we will check the event logs in the deployment with the kubectl logs -l app=nginx-deployment command…

(Here I am using the kubectl logs command with the -l, or selector, flag to specify the deployment with the label app=nginx-deployment I just created. You can see it if you look back at the details of the deployment under the Labels: and Selector: fields.)

Simple log output for the test deployment that has started running

Now that we have created a deployment, described the deployment, and found logs of the deployment, let's delete the deployment with the kubectl delete deployment nginx-deployment command…

Then one more kubectl get deployments to verify it was removed…

1. Create the same deployment using a YAML file
2. Display the details of this deployment via the command line
3. Update the YAML file to scale the deployment to 4 Nginx containers
4. Verify the change via the command line

We are going to create the same deployment we created from the command line, only with a deployment configuration file in YAML format. With a deployment configuration file, it is easier to make changes to your deployment in the future.

An easy way to get started is to run the same command we ran earlier to create our deployment, only this time with the --dry-run=client -o yaml flags applied. This lets us run the command without actually initializing a deployment. It will just give us the YAML output of what the deployment would look like, which we can redirect into a YAML configuration file in our current working directory.

Run the command kubectl create deployment nginx-deployment --image=nginx:1.22 --dry-run=client -o yaml > nginx_deploy.yaml

You won't get any confirmation output from this command, but if you do a quick ls -l you will see a new YAML configuration file in your current directory…

Then cat nginx_deploy.yaml to have a look at the contents…
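The generated file should look roughly like the following (minor details such as creationTimestamp and the empty resources/status fields can vary slightly between kubectl versions):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deployment
      name: nginx-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-deployment
      strategy: {}
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: nginx-deployment
        spec:
          containers:
          - image: nginx:1.22
            name: nginx
            resources: {}
    status: {}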

Let's go over the four required fields that MUST be in a Kubernetes YAML configuration file for it to work:

The apiVersion field specifies which Kubernetes API is used to create the object being defined. In Kubernetes, there are various APIs that enable you to create different Kubernetes objects; e.g., apiVersion: v1 contains many of the core objects and is considered the first stable release by Kubernetes. Another, which we have in our current YAML configuration file, is apiVersion: apps/v1, which adopts objects from v1 and provides critical functionality such as Deployments and ReplicaSets (hence why ours came out as apps/v1, since we are creating a Deployment).

The kind field allows you to specify which type of object in Kubernetes you wish to define. Objects specified in this field are tied to the apiVersion you specify, since it's the apiVersion field that gives you access to the different kinds of objects and their specific definitions. Some objects you can define in the kind field are Pod, Deployment, Service, DaemonSet, and so on. Our kind is a Deployment.

The metadata field provides unique properties for the object specified. It could include the name, namespace, timestamp, labels, etc. The values specified in these fields provide context for the object, and they can be referred to by other objects. Really this field just allows you to specify the identifier properties of the object.

The spec field is where the action takes place. This field allows you to define what you expect your object to do. It defines the operation of an object with key-value pairs. Much like the object itself, the specifications of the object depend on the apiVersion specified before. So, different apiVersions may include the same object, but the specs that can be defined will likely be different. Here you can define what ports you’d like open on the container, how many replicas you want, protocol types, names, image types, etc…

Now that we have a better understanding of the YAML configuration file, let's deploy it with kubectl apply -f nginx_deploy.yaml, which will apply your YAML configuration file and create the deployment…

Let's describe the deployment again with kubectl describe deployments nginx-deployment

Looks identical to the first deployment we described

Now let's make a small change to the YAML file. We will simply scale the deployment to 4 Nginx containers. To do this, we will change the number of replicas to 4. Run vim nginx_deploy.yaml, press i to enter --INSERT-- mode, and change the replicas: field under spec: from 1 to 4…

Once you’ve made the change, press ESC to exit --INSERT-- mode, then type :w nginx_deploy_v2.yaml to save it as a new version…

Hit ENTER, then type :q! to quit, and your new version of the YAML file will be created (the original nginx_deploy.yaml stays unchanged at 1 replica)…

A quick ls

cat your yaml files to verify the changes
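For a quick comparison, the only meaningful difference between the two files should be the replica count under spec::

    # nginx_deploy.yaml
    spec:
      replicas: 1

    # nginx_deploy_v2.yaml
    spec:
      replicas: 4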

Now that we have our new version of the YAML file, let's deploy the changes with the kubectl apply -f nginx_deploy_v2.yaml command…

Slight change here… instead of seeing deployment.apps/nginx-deployment created, we see deployment.apps/nginx-deployment configured, since we are only updating the deployment we already had running…

Let's quickly verify with the kubectl describe deployments nginx-deployment command again…

Changes have been highlighted

Here you can see the changes that were made to our deployment. A quick run of kubectl get pods will show us our four running Pods…
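The exact Pod names will differ, since the suffixes are generated, but the output should look something like this:

    NAME                                READY   STATUS    RESTARTS   AGE
    nginx-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          60s
    nginx-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          60s
    nginx-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          60s
    nginx-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          60s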

While we’re at it, how about a look at our Docker Desktop, because looking at it from the GUI is satisfying…

Let's run kubectl delete deployments nginx-deployment to have a clean slate for the next section. Don't worry, you can boot it back up again with no problem using your new YAML configuration file. (Make sure you are saving it to GitHub!)

1. Create a multi-container Pod that runs Nginx and Debian containers
2. Expose port 80 in Nginx container
3. Mount a directory to the Nginx container so it is available inside the container
4. Create a NodePort service using port 80

A little bit about Pods and multi-container use cases before we begin. A Pod is the smallest unit you can deploy in Kubernetes. If you need to run one container, you need to create a Pod for that container… but a Pod can contain more than one container, which keeps multiple applications running inside the Pod tightly coupled together. Think of the Pod as a single server: each container can access the other containers in the Pod using different ports on localhost. This will hopefully make more sense once we finish this section.

First I will create a new file called mc_pod.yaml

Then I will open up the file in Vim and input the following code…
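Here is a minimal version of that file. The container names nginx1 and debian2 match the ones used in the kubectl exec commands later on; the app: mc-pod label is an assumption on my part, added so the NodePort Service at the end of this section has something to select on:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mc-pod
      labels:
        app: mc-pod              # assumed label, used by the NodePort Service later
    spec:
      volumes:
      - name: html
        emptyDir: {}             # created empty when the Pod is assigned to a node
      containers:
      - name: nginx1
        image: nginx:1.22
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      - name: debian2
        image: debian:11
        volumeMounts:
        - name: html
          mountPath: /html
        command: ["/bin/sh", "-c"]
        args:
          - while true; do
              date >> /html/index.html;
              sleep 30;
            done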

This is a new file running off of apiVersion: v1, and our kind: this time is Pod. Under the metadata, I've given it the name mc-pod. In the spec: field, I have defined a volume called html whose type is emptyDir, which will be created once the Pod is assigned to a node. The first container runs an Nginx 1.22 web server, has a shared mount to the /usr/share/nginx/html directory, and also exposes port: 80 (which will come into play later).

The second container uses a Debian 11 image and has the shared volume mounted to the directory /html. In the args: field, there is a small script that appends the current date and time to the index.html file in the shared volume every 30 seconds. When a user makes an HTTP request to this Pod, Nginx will read this file and send it back to the user in the response.

Let's run this YAML file with kubectl apply -f mc_pod.yaml

Quickly ensure your pods are running with kubectl get pods

And why not a quick look at the GUI…

We can verify that our Pod is working by cat'ing the index file inside our Nginx and Debian containers with the following kubectl exec commands. The exec command allows us to inspect and debug our applications by executing commands inside our containers. We can even enter our containers with a bash shell and work inside them directly. The following commands tell Kubernetes the name of the Pod we want (mc-pod) and the name of the container we want (nginx1/debian2), and use the cat command to output the contents of the index file inside each container.

kubectl exec mc-pod -c nginx1 -- /bin/cat /usr/share/nginx/html/index.html

As you can see, the date every 30 seconds is being displayed from the index file

kubectl exec mc-pod -c debian2 -- /bin/cat /html/index.html

Same message every 30 seconds

For the final part of this project, we will create a NodePort Service for our multi-container Pod. Basically, a Service enables network access to a Pod or set of Pods in Kubernetes. A Service selects Pods by labels: when a network request is made to the Service, it routes the request to the Pods in your cluster that match the Service's selector. NodePort is great but comes with some limitations. One, you need to track which nodes have Pods with exposed ports. Two, you can only expose one Service per port. And lastly, NodePort ports are only available in the range from 30000 to 32767.

Let's update our multi-container Pod YAML file…

First, at the bottom of your current mc_pod.yaml file, we will place ---, which in YAML lets you manage multiple resources in the same file. It is essentially like creating a new YAML file within a YAML file.

Now add the following Service. You can see that this Service is of type NodePort; it selects our Pod by label, targets port 80 on the Nginx container, and opens port 30077 on the node. Feel free to change the port if you like.
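A sketch of that Service definition, assuming the app: mc-pod label from the Pod spec above and a made-up Service name of mc-pod-service:

    apiVersion: v1
    kind: Service
    metadata:
      name: mc-pod-service       # hypothetical name, pick whatever you like
    spec:
      type: NodePort
      selector:
        app: mc-pod              # matches the label on our multi-container Pod
      ports:
      - protocol: TCP
        port: 80                 # the Service's own port
        targetPort: 80           # the containerPort we exposed on nginx1
        nodePort: 30077          # the port opened on the node (30000-32767)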

The file should look like this by the end…

Let's run the file with kubectl apply -f mc_pod.yaml…

Verify services with kubectl get services
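With the definitions above, the output should look something like this (the CLUSTER-IP and AGE values will of course differ, and mc-pod-service is the name assumed in the sketch above):

    NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP        10d
    mc-pod-service   NodePort    10.x.x.x       <none>        80:30077/TCP   25s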

As you can see, the Service was created and the ports are as we specified in the YAML file…

And now, for our final test, let's head over to our web browser and enter localhost:30077

Success!

With that, we have completed all the objectives in this project. A quick recap of what we have done…

  • Created a deployment that runs the Nginx image from the CLI
  • Displayed the details of this deployment
  • Checked the event logs from the deployment
  • Deleted the deployment
  • Created the same deployment using a YAML file
  • Displayed the details of this deployment via the command line
  • Updated the YAML file to scale the deployment to 4 Nginx containers
  • Verified the change via the command line
  • Created a multi-container Pod that runs Nginx and Debian containers
  • Exposed port 80 on the Nginx container
  • Mounted a directory to the Nginx container so it is available inside the container
  • Created a NodePort service using port 80

There is SO MUCH to learn about Kubernetes, and this is only the very beginning, barely scraping the surface. But I really do enjoy my time learning about this powerful tool. I hope this tutorial was able to help you in any way.

Please feel free to leave comments, suggestions, praise, or even throw tomatoes at me. Thanks and see you on the next project!

Find me @ https://www.linkedin.com/in/dansantarossa/

#tutorial #kubernetes #docker #containers #pods #yaml #levelup #devops
