Deploying a Multi-Container Pod to a Kubernetes Cluster

Katoria Henry
Published in Geek Culture · 9 min read · Jan 12, 2023

If you’re in DevOps, or aspiring to be, I’m pretty sure that by now you’ve heard someone mention the term “Kubernetes”, or possibly “K8s”. But what exactly is Kubernetes? In its simplest, most straightforward definition, Kubernetes is the cloud-native magic that makes it super easy to deploy containers built on various runtimes (Docker being the most common, though the runtime itself is technically not part of Kubernetes). Okay, it’s not really magic: it’s used to automate, scale, and manage deployments of containerized applications. When we speak of containerized applications, we’re referring to software processes and microservices wrapped, together with their dependencies, into a portable, executable unit — the container.

Because Kubernetes gives you the ability to deploy containers across pools of resources, you’re able to quickly get containers up and running, including spinning up multiple replicas of an application. Kubernetes also provides configuration management capabilities, in that it can pass specific configuration details to your containers. Beyond the basics of “managing” containers, Kubernetes provides a framework for managing network communication, allowing containers within the same pod to communicate with each other over localhost.

Now, we can’t dive too deep into Kubernetes without discussing Pods, Clusters, Worker Nodes, and the Kubernetes API, all of which are managed by the Control Plane. A Pod, represented as a Kubernetes object, is a group of one or more containers that share a set of Linux namespaces, cgroups, and other isolation boundaries that specify how the containers run; Kubernetes deployments typically run a single container per Pod, though this tutorial will be an exception. Clusters are collections of multiple machines that run the containers — a cluster uses a set of nodes to run the applications that have been containerized.

Worker Nodes, in turn, run the containers within the cluster. A worker node monitors the state of the containers on the node, and reports this information back to the control plane. And finally, we have the Kubernetes API, which is the central communication point and the primary interface that allows users to query and manipulate objects, essentially controlling the cluster. Additional information regarding API security for authentication and authorization can be found in the official Kubernetes documentation.
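To make the “central communication point” idea concrete, here’s a quick sketch of talking to the API directly. This assumes kubectl is already configured against a running cluster; the port number is arbitrary:

```shell
# kubectl proxy opens an authenticated local tunnel to the API server
kubectl proxy --port=8001 &

# With the proxy up, plain HTTP requests reach the API; this lists pods
# in the default namespace as raw JSON
curl http://localhost:8001/api/v1/namespaces/default/pods
```

Everything kubectl does — get, apply, delete — ultimately goes through this same API.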

Now that we have that high-level overview of Kubernetes, Pods, Clusters, Worker Nodes, and the Kubernetes API out of the way (there’s so much more to Kubernetes, btw), let’s dive into this tutorial and create a Pod that runs multiple containers!

Resources/PreReqs:

  • As always, Confidence to get it done!
  • Docker Engine /Desktop Installed
  • Kubernetes installed
  • Basic understanding of Kubernetes commands
  • Nano or Vim (Text Editors)
  • Familiarity with YAML
  • Basic understanding of Linux command input/output
  • MacOS Terminal, Windows Command Prompt, or Linux Shell
  • Source Code Editor (I used Visual Studio Code (VS Code))

Step 1

For this tutorial, we will be using kubectl, which is a command line interface (CLI) for managing operations on your Kubernetes clusters. It essentially allows you to view, create, modify, and delete Kubernetes Objects. To get started, you can either use your basic OS Terminal, or execute the commands for this project using a different IDE, starting with the steps below:
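The view/create/modify/delete operations mentioned above map onto kubectl subcommands roughly like this (a sketch — `<podname>` and `pod.yml` are placeholders for your own names):

```shell
kubectl get pods              # view objects of a given type
kubectl apply -f pod.yml      # create or update objects from a manifest file
kubectl edit pod <podname>    # modify a live object in your editor
kubectl delete pod <podname>  # delete an object
```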

  • Start by confirming that you have Docker Desktop installed and Kubernetes is running on Docker Desktop. To get the cluster state, let’s try the command kubectl cluster-info, which should provide a URL to confirm kubectl is correctly configured to access your cluster:
*Note: We did not have to set up kubeadm since the cluster is up and running via Docker Desktop. Visit https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ to install kubeadm if you’re using a Debian- or Red Hat-based distribution.*
  • If we’d like to check which resource types (kinds of objects) are available, we can run the command kubectl api-resources, and the output should show us what’s available:
  • Although not totally necessary if you’re using Kubernetes with Docker Desktop, as a security best practice we can create a service account that includes role-based permissions, by typing the command kubectl create serviceaccount <yourserviceaccountname> (I named mine new-sa):
  • You’ll then need to create a yaml file for the role, by typing the command vi <rolename>.yml, and your file should look similar to this:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: list-pods-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list"]
  • Once you’ve created the file, type the command kubectl apply -f <rolename>.yml, and you should see the newly created role:
  • To connect the service account to the newly created role, type the command vi list-pods-rb.yml (rb=rolebinding), and your yaml file should resemble this:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: list-pods-role
subjects:
  - kind: ServiceAccount
    name: new-sa
    namespace: default
roleRef:
  kind: Role
  name: list-pods-role
  apiGroup: rbac.authorization.k8s.io
  • We can then enter the command kubectl apply -f list-pods-rb.yml to create the role binding:
  • Next, let’s run the command kubectl get pods -n kube-system to see which system pods we currently have running (this is also an example of interacting directly with the API to retrieve information from the control plane):
  • We can now check whether we have any nodes and pods running in the default namespace, by executing the commands kubectl get nodes followed by kubectl get pods, and your results may look similar (I’m using a different laptop that has a fresh version of Docker and Kubernetes installed):
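Before moving on, a quick way to confirm the RBAC wiring from the steps above is kubectl auth can-i, impersonating the service account. This assumes the account is named new-sa in the default namespace, as in the RoleBinding:

```shell
# Ask the API whether the service account may list pods; this should
# answer "yes" once the Role and RoleBinding are applied
kubectl auth can-i list pods \
  --as=system:serviceaccount:default:new-sa \
  --namespace default
```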

Step 2

Since we will be creating a multi-container Pod that runs Nginx and Debian containers, we have to create a manifest file (yaml file) that includes the specifics that we will need for our Pod. The containers will share the same network namespace, including the IP address and network ports. When creating your Pod, be sure that you include the following, as these are hard requirements: apiVersion (Kubernetes API version), kind (the kind of object being created), metadata.name (uniquely identifies the object), and spec (the object’s desired state).

  • To begin, you can either run the command touch <yourobjectname>.yml followed by nano <yourobjectname>.yml, or simply vi <yourobjectname>.yml. I’ve already created my file, but your file should also resemble the formatting below:
apiVersion: v1
kind: Pod
metadata:
  name: shared-storage
  labels:
    app: nginx-container
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: nginx-container
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: debian-container
      image: debian
      volumeMounts:
        - name: shared-data
          mountPath: /data
      command: ["/bin/sh"]
      args: ["-c", "echo Level Up Blue Team! > /data/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: podnode
spec:
  type: NodePort
  selector:
    app: nginx-container
  ports:
    - port: 80
      nodePort: 30010
  • Once you’ve created your file, you can type the command cat <yourfilename>.yml to ensure you’ve typed everything correctly. You’ll notice in my yaml file that the two containers share the same volume. The Debian container serves as the sidecar container: it writes “Level Up Blue Team!” to the volume, and then exits. I have also added a NodePort Service so the pod is exposed externally; we will run a command later to verify the port information is accurate:
  • Next, we’re going to deploy this pod, by executing the command kubectl apply -f <yourobjectname>.yml
  • We can check the status to verify the pod was created by typing the command kubectl get all. You’ll notice that my status was originally listed as “not ready”, so I waited a few minutes before running the next command:
  • We can then verify the state of the pod by running the command kubectl get pod <podname> --output=yaml, and you’ll get a ton of output, similar to this if the pod is up and running (which will show the date/time as the “startTime”):
  • If you’d like to see the pods in your namespace and their resource usage, run the commands kubectl top node and kubectl top pod -A (these require the metrics-server to be installed), as shown below:
  • To verify the NodePort service is accurate for the Nginx Container, let’s run the command kubectl get services, and your output should look like this:
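One more hedged sanity check at this stage: a Service only routes traffic if its selector actually matched the Pod’s labels. Assuming the service name podnode from the manifest above, we can inspect the endpoints it resolved:

```shell
# List the endpoints behind the podnode Service; an empty ENDPOINTS
# column means the selector did not match any pod labels
kubectl get endpoints podnode
```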

Step 3

Our next steps will entail verifying that the Debian container has written to the shared volume that the Nginx container serves, as defined in our yaml file. The Nginx container should still be running at this point, so we will proceed with the commands below to test it out:

  • Type kubectl exec -it shared-storage -c <containername> -- /bin/bash to open a shell in the container as *root* and run commands
  • Now, to get the expected response from the index.html file, let’s execute the command curl localhost, and the result should be the simple text we wrote to the volume. Be sure to type “exit” to leave the root shell:
  • To get additional details about our pod, we can run the command kubectl describe pod <podname>, and the output should provide information for both containers that are listed in our yaml file, similar to what we see below:
  • Because we’ve been doing quite a bit since creating our pod, let’s fetch some logs using the command kubectl logs <podname> -c <containername>, and you’ll notice that the logs show the date and time of processes starting, entry-point information, etc. You can run this command for both containers to get the expected log output as shown below:
  • If we’d like to confirm that the containers were in fact created using kubectl (although we know they were), we can visit our Docker Desktop and we can see the containers and images there as well:
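For intuition, the write-then-read flow we just verified can be sketched locally with no cluster at all. The temp directory below stands in for the emptyDir volume that the two containers share:

```shell
# A no-cluster sketch of the shared-volume sidecar pattern: one process
# writes index.html, another reads it back
SHARED_DIR=$(mktemp -d)                                # stands in for the emptyDir volume
echo 'Level Up Blue Team!' > "$SHARED_DIR/index.html"  # the debian-container's job
cat "$SHARED_DIR/index.html"                           # what nginx serves for curl localhost
```

This is exactly why curl localhost inside the pod returned our message: nginx serves whatever the sidecar wrote to the shared mount.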

Step 4

Now that we’ve successfully verified that we have an active cluster, created a yaml file that included a pod deployment for that cluster, and verified the logs of the containers, it’s time that we test the webpage using our browser to do one last sanity check that everything worked out well with our deployments. Let’s wrap things up with the final steps below:

  • In your browser (I had to use Incognito mode), type localhost:<NodePort>, and you should see the following webpage, which means we have 100% success for this tutorial!
  • If you’d like to remove the pod that you’ve created (though technically you don’t have to), you can run the command kubectl delete pod <podname>, and there should be a message that populates to show the pod was deleted.
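As a side note, if port 30010 is unavailable in your environment, kubectl port-forward is an alternative way to reach the pod. This sketch assumes the pod name shared-storage from the manifest above:

```shell
# Forward local port 8080 straight to port 80 on the pod, then browse
# http://localhost:8080 (runs in the foreground until interrupted)
kubectl port-forward pod/shared-storage 8080:80
```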

And that just about wraps things up for this tutorial. That was so much fun, don’t you think 😊?! As always, thanks for stopping by!!

👉🏽 Follow me on LinkedIn, and @theCaptN21 on GitHub!
