Two K8s Deployments, One Service, and a Persistent Volume

Adam Leonard · Published in Nerd For Tech · Feb 13, 2023 · 8 min read

What is Kubernetes Deployment?

A Deployment is a Kubernetes resource object that provides declarative updates to applications. A deployment allows you to describe an application’s life cycle, such as which images to use for the app, the number of pods there should be, and the way in which they should be updated.
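
For example, rolling out a new image and rolling it back are both declarative, one-line operations. A quick sketch, using a hypothetical deployment named my-app with a container named nginx:

kubectl set image deployment/my-app nginx=nginx:1.25   # declare a new image; K8s performs a rolling update
kubectl rollout status deployment/my-app               # watch the rollout progress
kubectl rollout undo deployment/my-app                 # roll back to the previous revision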

What is a Config Map?

ConfigMaps are used to separate application code from the environment-specific configuration. With ConfigMaps, there’s no need to hardcode the configuration data in the Pod specification. You can change the configuration settings during runtime and create complex configurations with ease.
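
As a quick illustration (the ConfigMap name app-config and the key LOG_LEVEL are made up for this example), a ConfigMap can be created and changed entirely from the CLI:

kubectl create configmap app-config --from-literal=LOG_LEVEL=debug
kubectl edit configmap app-config   # change values in place at runtime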

What is Persistent Storage?

A persistent storage volume is a piece of storage in a cluster that an administrator has provisioned. It is a resource in the cluster, just as a node is a cluster resource.

Prerequisites

- AWS Cloud9 or similar IDE

- Operational Kubernetes cluster (1 control plane node, 2 worker nodes)

- Basic Linux and command-line knowledge

- Familiarity with Vi, Vim, or a similar editor

Below is a link to my GitHub to help with setting up a working K8s cluster using Cloud9.

Objective

1. Create two deployments, each containing two pods running nginx.

2. Include a ConfigMap for each deployment that points to a custom index.html page containing the line “This is Deployment One” or “This is Deployment Two.”

3. Create a service that points to both deployments. You should be able to reach both deployments using the same IP address and port number.

4. Use the curl command to validate the index.html pages from both Deployment One and Deployment Two.

5. Create a persistent volume manifest with 250MB of disk space in the /tmp/k8s directory, then add the persistent volume claim manifest file.

*NOTE*👀

After completing this project I learned a few things I should have done differently:

🚫First, the ConfigMap should always be created before the Deployment. I did it in reverse order, and if the Deployment is applied first, it will not mount the ConfigMap correctly. I updated a portion of my deployment and re-applied it, so it still worked for me.

🚫Second, multiple YAML manifests can be combined into a single YAML file, separated by --- lines.

This is part of the learning process and I am happy my deployment worked, but also excited I was able to learn why it worked and how I can improve on it.

Time to get started

Step 1: Create a Config Map for each Deployment

A ConfigMap needs to be created for each deployment. This will instruct the deployment to serve a custom HTML page informing the user they are accessing Deployment One.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-deployment-config-1
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to LUIT!</title>
    </head>
    <body>
    <p>This is Deployment One</p>
    </body>
    </html>

Create another YAML file to inform the user they are connected to the second deployment nodes.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-deployment-config-2
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to LUIT!</title>
    </head>
    <body>
    <p>This is Deployment Two</p>
    </body>
    </html>

Apply both of the ConfigMap files, then list the running ConfigMaps to confirm they were created:

kubectl apply -f <config-file.yml>
kubectl get cm -o wide
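
If you want to confirm the HTML made it into a ConfigMap, describing it prints the full index.html contents:

kubectl describe configmap nginx-deployment-config-1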

Step 2: Create Two Separate Nginx Deployments of Two Pods

The first deployment will consist of two pods running nginx. The deployment can be created in the CLI or with a YAML file: two replicas, running nginx, with a volume mount for the ConfigMap.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-one
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-deployment-config-volume-1
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-deployment-config-volume-1
        configMap:
          name: nginx-deployment-config-1

Next, we will create the second deployment with the same parameters.

The Deployment YAML file is nearly identical, but the metadata name, the volume name, and the ConfigMap reference all need to be updated for the second deployment, as shown in the sketch below.
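
Here is a sketch of the second manifest, assuming the names nginx-deployment-two and nginx-deployment-config-volume-2. Note that the app: nginx label is unchanged, which is what will let a single Service select the pods from both deployments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-two
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-deployment-config-volume-2
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-deployment-config-volume-2
        configMap:
          name: nginx-deployment-config-2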

Both new deployments are up and running. We can check the status with the following command:

kubectl get deployments -o wide

Both deployments are displayed with 2 pods each, spread across both worker nodes. We can confirm the pod placement with the command below.
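
The -o wide flag adds a NODE column, so you can see which worker each of the four pods landed on:

kubectl get pods -o wide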

Step 3: Create a Service Pointing to Both Deployments

Open up a new YAML file for the Deployment Service. This service will use a NodePort to expose the deployments outside the cluster. Because both deployments label their pods app: nginx, a single selector matches the pods from both.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32000

Apply the YAML file to start the service.
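
Assuming the manifest is saved as nginx-service.yml (the filename is mine), apply it and confirm the assigned NodePort:

kubectl apply -f nginx-service.yml
kubectl get svc nginx-service -o wide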

Use the curl command to ensure we can see the HTML page from each deployment.

curl <Service IP>:<Service Port>

Since I am using AWS Cloud9, each of these IPs is the public IP of the EC2 instance that the worker node runs on.

Check the Public IP address of each worker node the deployments are running on.
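
One way to look them up is from kubectl itself; if the EXTERNAL-IP column shows <none> on your cluster, grab the public IPs from the EC2 console instead:

kubectl get nodes -o wide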

Below is one of the public IPs of a worker node. Note the listed IP and port: connecting to it returns the custom text from Deployment One.

I also snuck a custom HTML image into each deployment's ConfigMap to help differentiate the deployments.

Note the same IP and port as the image above, but now we are connected to Deployment Two. This confirms our service is working as intended.

Step 4: Set Up a Persistent Volume for Both Deployments

Persistent storage is beneficial because it outlives the pods associated with it. After restarts and pod removal, the volume will still remain, along with its data.

Create our StorageClass YAML file. A default storage class may already exist in the cluster, but to make volume expansion available, a YAML file needs to be created that sets allowVolumeExpansion to true.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localdisk
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true

Save the file, apply it, and view our newly created storage class.

kubectl get sc

Next, we will create the PersistentVolume YAML file. Give it a name in the metadata and use the localdisk storage class we just created. The reclaim policy is set to Recycle: when its claim is released, the volume's contents are scrubbed and the volume becomes available again for a new claim.

Set the storage capacity to 1 gigabyte; the volume will be stored in the /tmp/k8s directory via the hostPath.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: per-vol
spec:
  storageClassName: localdisk
  persistentVolumeReclaimPolicy: Recycle
  capacity:
    storage: 1G
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/k8s

Save the file, apply it, and check our persistent volumes to ensure it was created.

kubectl get pv

Create a PersistentVolumeClaim that will bind to the volume. We will request less than the volume's total capacity to ensure the claim can be satisfied.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: per-vol-claim
spec:
  storageClassName: localdisk
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 250M

Verify the Persistent Volume and the Volume Claim have been bound to each other.
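
Both objects should report a STATUS of Bound:

kubectl get pv,pvc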

Lastly, we need to create a pod that tests our volume by redirecting the output of an echo command into the directory where the volume is mounted. The restart policy is set to Never; after this command runs, we will no longer need the pod.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-deployment-three
spec:
  restartPolicy: Never
  containers:
  - name: nginx
    image: nginx
    command: ['sh', '-c', 'echo LUIT Success! > /tmp/k8s/success.txt']
    ports:
    - containerPort: 80
    volumeMounts:
    - name: pv-storage
      mountPath: /tmp/k8s
  volumes:
  - name: pv-storage
    persistentVolumeClaim:
      claimName: per-vol-claim

The pod shows “Completed” rather than “Running” because we set it to run a single command, writing the success text, and never restart.
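
You can confirm the status, and see which worker node the pod (and therefore the file) landed on, with:

kubectl get pod nginx-deployment-three -o wide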

Move over to the worker node the volume is mounted on to verify the file. Use a simple cat command to output the contents of the file.
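
On that worker node, the file should contain the text echoed by the pod's command:

cat /tmp/k8s/success.txt   # should print: LUIT Success!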

If you made it this far, congratulations on successfully completing this walkthrough.

There were a lot of moving parts to get to this point. I hope my breakdown helped explain this project. Thank you for reading!

Please give me a follow on Medium and feel free to connect with me on LinkedIn https://www.linkedin.com/in/adamcleonard/
