Kubernetes Magic: Spinning Up Dual Deployments and Shared Services with Custom ConfigMaps and Persistent Storage

A comprehensive guide to spinning up deployments and shared services using custom ConfigMaps and persistent storage in Kubernetes.

Aaron Bachman
Cloud Native Daily
10 min read · Jul 23, 2023


On to a new magical adventure, this time with Kubernetes! Well, maybe not magical, but definitely cool. Today we will embark on a journey to spin up two deployments. The first deployment, consisting of two pods, will run the NGINX image while displaying a custom index.html page that declares “This is Deployment One,” thanks to our ConfigMap’s mystical guiding hand. The second deployment will also hold two pods running the NGINX image, enchanting visitors with its own custom index.html page bearing the inscription “This is Deployment Two,” courtesy of its own ConfigMap file.

But the real magic happens when we create a service that unites both deployments, offering access to both using a single IP address and port number. No hocus-pocus is needed; it’s Kubernetes in action, bringing these deployments together.

To verify, we’ll use the curl command to witness the index.html pages from both Deployment One and Deployment Two.

For those looking to take their Kubernetes skills to the next level, we’ll also delve into creating a Kubernetes persistent storage manifest, providing 250MB of disk space on the host computer in the /temp/k8s directory. This ensures our data stays safe and available, without any magical tricks required.

If you are joining me, I really appreciate it, so on with this adventure where we explore dual deployments, shared services, custom ConfigMaps, and the practical magic of Kubernetes to make it all work seamlessly!

In this project, I will try to be short and to the point. For deeper background, the official Kubernetes documentation is a great resource and covers all aspects of Kubernetes.

For my project, I installed MicroK8s and kubectl on an EC2 instance in AWS. I will not be covering the installation of MicroK8s or kubectl here, but there are plenty of resources on the web describing how to do this.

Step 1: Create ConfigMaps

I have a ConfigMap for each deployment, named “deployment-one-configmap.yaml” and “deployment-two-configmap.yaml”.

Using vim, I added the code needed for each deployment. The two files are:

deployment-one-configmap.yaml
deployment-two-configmap.yaml
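
The file contents themselves appeared only in screenshots, so here is a minimal sketch of what deployment-one-configmap.yaml likely contains. The key must be index.html so that NGINX serves it once the ConfigMap is mounted at /usr/share/nginx/html; the exact HTML markup is my assumption:

apiVersion: v1
kind: ConfigMap
metadata:
  name: deployment-one-configmap
data:
  index.html: |
    <h1>This is Deployment One</h1>

deployment-two-configmap.yaml is identical apart from its name and the “This is Deployment Two” message.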

I now need to apply these new files to my Kubernetes cluster using the commands below.

kubectl apply -f deployment-one-configmap.yaml
kubectl apply -f deployment-two-configmap.yaml

The command kubectl get configmaps will show you whether those ConfigMaps have been applied.

Step 2: Create Deployments for your pods

Next, I created two deployment files, each using the nginx image and referencing the respective ConfigMaps that I created earlier for the custom index.html pages.

Deployment One (deployment-one.yaml) is below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-one
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: aaron-nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: config-volume
        configMap:
          name: deployment-one-configmap

Save this, then create another file for Deployment Two named “deployment-two.yaml” and add the code below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-two
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: aaron-nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: config-volume
        configMap:
          name: deployment-two-configmap

We now need to apply these deployments to the cluster.

kubectl apply -f deployment-one.yaml
kubectl apply -f deployment-two.yaml

After looking around a little more, I realized that instead of the two commands above, you could combine both manifests into a single file and apply it with one command, kubectl apply -f deployments.yaml. This accomplishes the same thing.
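
As a sketch, the combined deployments.yaml is just the two manifests concatenated with a YAML document separator:

# deployments.yaml: both Deployment manifests in one file
# (full contents of deployment-one.yaml)
---
# (full contents of deployment-two.yaml)

Alternatively, kubectl apply -f . applies every manifest in the current directory.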

Now if we run the command kubectl get pods, we should see four pods, two from each deployment, in the Running state.

Step 3: Create a service

We will now create a service that points to both deployments.

The file is named “service.yaml” and is below.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80
    nodePort: 30080 # Add the desired NodePort number here
  type: LoadBalancer

Note: I went through several versions of each of these files. These should be mostly copy and paste, but you may still run into syntax or other issues. After you save this file, apply it with:

kubectl apply -f service.yaml

If you run kubectl get service, you should see nginx-service listed with its Cluster-IP and port mapping (80:30080).

Step 4: Validate the Deployments

To do this, I first had to obtain the IP of the service that was created. The following command will do this: kubectl get svc nginx-service.

In the next command, I input the Cluster-IP address and service port listed above.

curl Service_IP_Address:Service_Port

You should see output like this, with “This is Deployment One” and “This is Deployment Two” alternating across repeated curl requests.
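
To watch the responses alternate without retyping, you can run curl in a small loop (a sketch; substitute your service’s actual IP and port):

for i in 1 2 3 4 5 6; do curl http://Service_IP_Address:Service_Port/; done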

I had an issue here: in my original service.yaml, the app label under the selector matched only the first deployment’s pods, so curl kept returning only Deployment One. I then tried specifying both labels like this, but repeating the same key in a selector is not allowed (a selector is a map of key-value pairs, and a Service selects only pods that match all of them, not any one of them).

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    matchLabels:
      app: nginx-deployment-one
      app: nginx-deployment-two
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

I ended up giving both deployments the same app: nginx label and using the following selector, which worked without a problem.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: 80
    nodePort: 30080
  type: LoadBalancer

Verify your deployments with kubectl get deployments.

Next, we can use the command kubectl get svc to get the IP address and port for our service.

Below you can see that the successful curl command reaches both deployments.

I had to go back and edit my second ConfigMap file, and unfortunately I did not capture a new screenshot of the curl output with the proper title for the second deployment. If you keep issuing the command, though, you should see “This is Deployment Two”.

Step 5: Advanced portion

In this portion, I will create a Kubernetes persistent storage manifest that provides 250MB of disk space on the host computer in the /temp/k8s directory, along with a corresponding persistent volume claim manifest.
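
The manifests themselves appeared only in screenshots, so here is a minimal sketch of what they might look like. The resource names, the manual storage class, and 250Mi as the nearest round unit to 250MB are my assumptions, not the original files:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv                # placeholder name
spec:
  capacity:
    storage: 250Mi              # approximately the 250MB called for
  accessModes:
  - ReadWriteOnce
  storageClassName: manual      # lets the claim below bind to this specific volume
  hostPath:
    path: /temp/k8s             # directory on the host computer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc               # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 250Mi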

After this, I had to add a volume and volume mount for the Persistent Volume Claim (PVC) to both deployment files, deployment-one.yaml and deployment-two.yaml (the mounts belong in the Deployment manifests, not the ConfigMap files). This allows the pods to use the persistent storage.

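The updated files appeared only in a screenshot of one file; the change is the same in both deployments. As a sketch, each pod spec might end up looking like the fragment below, where the volume name, mount path, and claim name are placeholders that must match your PVC manifest:

      containers:
      - name: aaron-nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-volume           # existing ConfigMap mount
          mountPath: /usr/share/nginx/html
        - name: pvc-volume              # new PVC mount
          mountPath: /data              # placeholder path inside the container
      volumes:
      - name: config-volume
        configMap:
          name: deployment-one-configmap
      - name: pvc-volume
        persistentVolumeClaim:
          claimName: local-pvc          # must match the PVC's metadata.name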

Complex portion:

For the complex portion of this project, we are going to create a Kubernetes CronJob that spins up both deployments and the service at a specific time, and then test it.

The first thing we are going to do is create a new file called “cronjob.yaml”.

Once created, I used vim to add the code needed for this step.
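
The file’s contents appeared only in a screenshot, so what follows is one way such a CronJob might be written, not my actual file: it runs a kubectl container on a schedule and applies manifests mounted from a ConfigMap. The image, schedule, ServiceAccount, and ConfigMap names are all assumptions, and the ServiceAccount would need RBAC permission to create Deployments and Services.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "0 8 * * *"                   # placeholder: every day at 08:00
  jobTemplate:
    spec:
      backoffLimit: 3
      template:
        spec:
          serviceAccountName: deployer    # placeholder; needs RBAC to create resources
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest # any image with kubectl would do
            command: ["/bin/sh", "-c", "kubectl apply -f /manifests/"]
            volumeMounts:
            - name: manifests
              mountPath: /manifests
          volumes:
          - name: manifests
            configMap:
              name: deployment-manifests  # placeholder ConfigMap holding the YAML files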

Run the command kubectl apply -f cronjob.yaml to apply the CronJob to your Kubernetes cluster.

Test the cronjob:

Next up, we want to test the CronJob to make sure it is functioning properly. Before we do that, we will use the following command to confirm that the CronJob has been created and is active.

kubectl get cronjobs

Next, to test, we will trigger the CronJob manually. Use the following command to trigger yours. Note: make sure to use the correct name for your CronJob if it is not my-cronjob.

kubectl create job --from=cronjob/my-cronjob my-cronjob-manual-run

This command creates a new Job named my-cronjob-manual-run from the named CronJob’s job template. The Job runs once, immediately, regardless of the schedule defined in the CronJob, and initiates the deployments and service.

Next, we will check the status of the Job to make sure it completed, using the following commands. You will want to see the my-cronjob-manual-run Job appear in the list with a status of "Completed".

kubectl get jobs
kubectl get deployments
kubectl get services

I ended up creating a second job with a different name to see if the outcome would be different, but neither cron job completed successfully.

To troubleshoot further, I ran kubectl describe job my-cronjob-manual-run. At the bottom of the output you can see that I had deleted one Job, and that the current Job had reached the specified backoff limit, so it failed.

To continue troubleshooting I ran kubectl get pods.

Next, I ran kubectl describe pod <pod-name>, and received the following output.

I ran multiple commands trying to find information about the pod for my cron job, only to realize after a significant amount of time that because the Job failed, the pod had been deleted and was no longer available. Why my pod named my-cronjob-manual-run-9rkvt still reported 1/2 ready does not make sense to me. When I ran kubectl logs my-cronjob-manual-run-9rkvt, the output still said that the pod was not available, which makes sense if the pod has in fact been deleted. Running the describe command again produced the same results.

When a Pod enters a “CrashLoopBackOff” state, it means that the Pod keeps crashing immediately after starting due to some issue. Kubernetes automatically restarts the Pod, but if the issue persists, it will keep failing in a loop.
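
For anyone retracing these steps, two commands that often help with a CrashLoopBackOff (assuming the Pod object still exists) are fetching the logs of the previous, crashed container instance and reading the Events section at the bottom of the describe output:

kubectl logs <pod-name> --previous
kubectl describe pod <pod-name>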

At this point, I have not been able to get any further in troubleshooting, but I wanted to include it here because even though I have not solved this problem yet, a lot of learning was happening and that is what this journey is all about!

Conclusion:

During this project, I had the opportunity to work with various Kubernetes components, such as Pods, Deployments, Services, ConfigMaps, and more. I gained insights into how these building blocks come together to create a highly resilient, scalable, and efficient infrastructure for applications.

One of the most significant advantages I see with Kubernetes is its ability to handle complex workloads. Whether you’re running a small web application or a large-scale microservices architecture, Kubernetes ensures that your application runs smoothly, can scale seamlessly, and recovers from failures automatically.

Thank you again for joining me on this journey. I hope you’ve found this exploration of Kubernetes as rewarding and enlightening as I have.
