More Kubernetes: ConfigMaps and PersistentVolumes

Dahmear Johnson
Published in Nerd For Tech
9 min read · Feb 9, 2023

As we continue diving into the world of Kubernetes☸️, we find a variety of features at our disposal to help automate and manage our container deployments. In this blog, we will discuss ConfigMaps and PersistentVolumes and walk through a simple example of how we can incorporate them into our deployments.

What are ConfigMaps? 🤔

ConfigMaps are Kubernetes API objects that allow developers to pass configuration data in a key-value pair format to pods during deployment. ConfigMaps decouple configuration data from your application code. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume. In this walkthrough, we use ConfigMaps to pass customized HTML to the index.html file in /usr/share/nginx/html/.
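For example, here is a minimal sketch of a Pod consuming a ConfigMap key as an environment variable (the ConfigMap name app-config and key log.level are hypothetical, purely for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: demo
    image: nginx
    env:
    - name: LOG_LEVEL            # environment variable exposed to the container
      valueFrom:
        configMapKeyRef:
          name: app-config       # hypothetical ConfigMap name
          key: log.level         # hypothetical key within that ConfigMap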

What are PersistentVolumes? 🤔

A PersistentVolume (PV) is also a Kubernetes API object; it abstracts the details of how storage is provided from how it is consumed. To put it another way, PVs allow developers to separate the storage implementation details from the way pods request and access that storage. ☝🏽

A PV has a lifecycle independent of any pod it is bound to, providing more granular control of this persistent storage type. Persistent storage in Kubernetes involves three API objects: StorageClass, PersistentVolume, and PersistentVolumeClaim. As we move along in this walkthrough, we will cover the StorageClass and PersistentVolumeClaim API objects in more detail.

Walkthrough Objectives:

  • Deploy two Deployments with two Pods each, running the nginx image.
  • Include ConfigMaps that point to two different custom index.html pages.
  • Create a service that points to both deployments. Confirm reachability to both using the same IP address and port number.
  • Create a persistent storage manifest that provides disk space on the host node in the /tmp/k8s directory. Then, create a corresponding persistent storage claim manifest that requests approximately 250 MB of it.

Prerequisites:

  • 3 Linux VMs that are publicly accessible from the Internet with Kubernetes installed and configured.
  • 1 Control Plane and 2 Worker nodes.
  • A regular user account with SSH remote access.
  • Familiarity with a terminal text editor and basic Kubernetes principles.

Step 1: Create the ConfigMaps

First, we are going to SSH into our control node, create a YAML file named “nginx-configmaps.yml”, and enter the following code:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-v1
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to Nginx!</title>
    </head>
    <body>
    <p>This is Deployment One</p>
    </body>
    </html>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-v2
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to Nginx!</title>
    </head>
    <body>
    <p>This is Deployment Two</p>
    </body>
    </html>

📝NOTE: Be sure to save your changes!

In the code above, we are creating our two ConfigMaps with customized HTML code. The key, “index.html”, represents the file we are creating in the directory path that will be specified later. What we expect to see is that when we curl a worker node on its NodePort, our connections cycle between the pods of the two deployments, returning either “This is Deployment One” or “This is Deployment Two” in the output.

Once the file has been created, run “kubectl apply -f nginx-configmaps.yml” to deploy.

kubectl apply -f nginx-configmaps.yml

We can view the details of our ConfigMaps by running the following:

kubectl get cm
kubectl describe cm <configmap name>
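If you want to dump a ConfigMap in full, including the HTML stored under the index.html key, you can also render it as YAML:

kubectl get configmap nginx-configmap-v1 -o yaml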

Step 2: Configure Pod Deployments

In this step, we will create a YAML file named “nginx-deployments.yml” that defines two Deployments with two replica pods each, all running nginx. We will also create volumes that map our ConfigMaps to the mount path “/usr/share/nginx/html” in each container.

Create “nginx-deployments.yml” and enter the following code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-v1
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config-volume-v1
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-config-volume-v1
        configMap:
          name: nginx-configmap-v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-v2
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config-volume-v2
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-config-volume-v2
        configMap:
          name: nginx-configmap-v2

Next, run “kubectl apply -f nginx-deployments.yml” to create both Deployments:

kubectl apply -f nginx-deployments.yml

Verify our deployments have been created successfully:

kubectl get pods -o wide

We have verified that each of our deployments has rolled out 2 pods across the available worker nodes, k8s-worker1 and k8s-worker2. 👍🏽

Step 3: Create NodePort Service

To view the custom HTML pages from outside the cluster, we can create a NodePort service. A NodePort service allows us to expose our containerized application/service at a static port on each Node’s IP.

📝NOTE: In the real world, you may want to deploy an Ingress object and Controller to expose HTTP/HTTPS routes from outside the cluster to a service within the cluster. Ingress objects and Ingress Controllers are outside the scope of this blog, and I recommend reviewing the official documentation at kubernetes.io.

Create a file named “nginx-service.yml” and enter the code below:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32000

This simple NodePort service configuration will reserve TCP port 32000 on each node that is part of the cluster and map it to TCP port 80 on our containers. We also use the selector “app=nginx” because that is the label we applied in our deployment config. Since both deployments share that label, this ensures the service targets the pods of both.
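If you omit the nodePort field, Kubernetes assigns one automatically from the default 30000–32767 range. Either way, you can read back the assigned port with a jsonpath query:

kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}'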

Apply configuration:

kubectl apply -f nginx-service.yml

View NodePort service:

kubectl get svc -o wide

If we open a browser, we should be able to connect to the public IP of one of our nodes on port 32000.


📝NOTE: I did notice in my case that I had to open another browser window to see the service map to both deployments; simply refreshing the browser did not cause the NodePort service to cycle between the two customized HTML pages. This is most likely because the browser reuses a keep-alive connection and kube-proxy balances traffic per connection rather than per request, but I am still learning the details. I wanted to share my observation for transparency. 😊

Another way to test is to curl our pods via a worker node’s host name and TCP port 32000.
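Here is a quick sketch of that check (this assumes the k8s-worker1 host name resolves from wherever you run it; because each curl opens a new connection, the responses should alternate between the two deployments):

for i in 1 2 3 4 5 6; do curl -s http://k8s-worker1:32000; done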

Step 4: Configure Pod with PersistentVolume

Now we are going to switch gears a bit. In this step, we are going to deploy a new pod along with a PersistentVolume.

First, let’s create our StorageClass resource. A StorageClass resource provides a way for administrators to describe the “classes” of storage they offer. A StorageClass can help inform storage consumers about the storage resource by describing its type, quality of service, location, purpose, etc.

Create a file named “my-storageclass.yml” and enter the following code:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: localdisk
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true

We set the “allowVolumeExpansion” value to true in the event that we want to expand the volume. By default, this value is set to false. ☝🏽
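If we later wanted to grow a claim bound to this class, we would edit the claim’s storage request. A hedged sketch using the PVC we create later in this walkthrough (whether the resize actually happens depends on the volume plugin, and simple local volumes generally cannot be expanded online):

kubectl patch pvc nginx-pod-pvc -p '{"spec":{"resources":{"requests":{"storage":"500M"}}}}'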

Apply the StorageClass configuration:

kubectl apply -f my-storageclass.yml

View StorageClass:

kubectl get sc
kubectl describe sc

Next, let’s create our PersistentVolume. Create a file named “my-pv.yml” and enter the code below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pod-pv
spec:
  storageClassName: localdisk
  persistentVolumeReclaimPolicy: Recycle
  capacity:
    storage: 1G
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/k8s

In the code above, we specify the storage capacity at 1 GB, the access mode, and the storage path location. The hostPath “/tmp/k8s” will be the directory created on the worker node where the pod is deployed. Lastly, we set our persistent volume reclaim policy to Recycle. The reclaim policy for a PersistentVolume tells the cluster what to do with the volume after it has been released of its claim. The Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim. (Note that Recycle is deprecated upstream in favor of dynamic provisioning, but it works fine for a simple lab like this one.)

For more detail, see the Persistent Volumes documentation at kubernetes.io.

Apply the PersistentVolume configuration:

kubectl apply -f my-pv.yml

View PersistentVolume:

kubectl get pv
kubectl describe pv

Moving along, we will now create our PersistentVolumeClaim, which is a request for storage by a user.

To create our PersistentVolumeClaim, create a file named “my-pvc.yml” and enter the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pod-pvc
spec:
  storageClassName: localdisk
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 250M

In our PersistentVolumeClaim, the accessModes and storageClassName must match what is present in the PersistentVolume configuration; otherwise, the claim will remain Pending. We also specify a storage request of 250 MB out of the 1 GB the PersistentVolume makes available.

Apply PersistentVolumeClaim configuration:

kubectl apply -f my-pvc.yml
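Before moving on, we can confirm the claim bound to our PersistentVolume by checking that its STATUS column reads Bound:

kubectl get pvc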

Finally, we can deploy our Pod! Create a file named “nginx-pod.yml” and enter the code below:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  restartPolicy: OnFailure
  containers:
  - name: nginx-pv
    image: nginx
    command: ['sh', '-c', 'echo "This is Persistent!" > /tmp/k8s/success.txt']
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-storage
      mountPath: /tmp/k8s
  volumes:
  - name: nginx-storage
    persistentVolumeClaim:
      claimName: nginx-pod-pvc

The code above creates a Pod that runs a single task: it writes “This is Persistent!” to a file named success.txt stored in /tmp/k8s. We also made the PersistentVolume hostPath and the Pod volumeMount mountPath the same for simplicity. The PersistentVolumeClaim is referenced when creating a new volume named “nginx-storage”; the pod uses the claim to request access to the PersistentVolume. Note that because the command overrides the nginx image’s default entrypoint, the container exits as soon as the echo completes, and with restartPolicy set to OnFailure a successful exit leaves the pod in a Completed state.

Deploy Pod configuration:

kubectl apply -f nginx-pod.yml

Review PersistentVolume claim status:

kubectl get pv

As we can see, the claim was successful: the CLAIM column now shows the “default/nginx-pod-pvc” entry.

Verify Pod deployment was completed successfully and find the Node that the Pod deployed to:

kubectl get pods -o wide
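Alternatively, a jsonpath query pulls the node name directly:

kubectl get pod nginx-pod -o jsonpath='{.spec.nodeName}'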

Next, SSH into the worker node that you identified as the host for “nginx-pod” and run the following:

cat /tmp/k8s/success.txt
This is Persistent!

Success!!!

As usual, I appreciate the time you put aside to read my blogs and walk through the demonstration. If you have any recommendations on how I can improve my blog and content please don’t hesitate to connect with me on LinkedIn at Dahmear Johnson | LinkedIn.
