The new modern Workload on vSphere

Navneet Verma
May 5

Introduction

vSphere 7 with Tanzu is transforming the way people approach modernizing their on-prem application delivery. With the native integration of Kubernetes within the platform, the doors have opened to delivering the new application stack inside the vSphere environment. Gone are the days when you could only deploy VMs on the vSphere platform. With the integration of Tanzu, you can now natively deploy new objects like pods, load balancers, and Kubernetes clusters alongside VMs. With the release of the much-awaited VM Operator feature (version 7.0 U2a), the DevOps persona can even leverage standard Kubernetes processes and CLIs to deploy VMs along with pods. In this article, with an easy-to-follow example, I will discuss how the DevOps persona can leverage their Kubernetes tool of choice to deploy an application stack consisting of Kubernetes clusters, VMs, pods, registries, and load balancers.

New Workload Concept

As mentioned previously, a few years back developers requested VMs to be provisioned for application deployments. The vSphere admins deployed the VMs and then handed over control to the developers, who deployed applications on them. The entire process took days to weeks, and every layer of the infrastructure was treated as a pet. This eventually led to automation of VM deployments.

With the introduction of Kubernetes, the Operations team was responsible for setting up and managing the Kubernetes clusters. The DevOps persona (in some cases, the same group of individuals) would then deploy the containerized applications on these Kubernetes clusters. In most cases, the applications, through containerization, had become cattle, but unfortunately, the underlying infrastructure was still being treated as pets. This brought its own set of challenges.

As processes matured, the operations team also started looking at options to treat the Kubernetes infrastructure as cattle. Cluster API was one of those trailblazing projects that helped alleviate some of the heavy lifting of day-to-day Kubernetes cluster lifecycle management. Now the DevOps persona could create and manage their Kubernetes infrastructure in the same declarative way they managed their application deployments.

As we started the next phase of the app modernization journey, we discovered that the next set of applications was stickier and more complex: they have numerous points of interaction, often with monolithic, stateful VMs, which makes containerizing them extremely difficult.

How do we modernize such an application? How do we treat this entire setup as cattle and have a declarative YAML to deploy and maintain the desired state of the said application?

vSphere with Tanzu to the rescue

Existing and new customers of vSphere can benefit immensely from the new features that have now been natively integrated into vCenter. There are numerous articles on the web summarizing and detailing how the native integration of Kubernetes, Cluster API, Harbor, and other CNCF technologies within vSphere is delivering value to the customer. With a simple click of a button (well, maybe a short wizard), the vSphere admin can now convert the existing vSphere clusters to a Supervisor Cluster that understands the Kubernetes language. This Supervisor Cluster can directly deliver numerous cloud-native services for consumption by the DevOps persona. Some of these services are —

  • Container Registry using Harbor
  • Kubernetes Cluster LCM using Cluster API
  • Load Balancer Services
  • Authentication Services
  • Ability to run containers natively on ESXi
  • Namespace and resource management
  • Virtual Machine LCM using VMOperator
  • And many more to come.
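
Most of these services surface as ordinary Kubernetes APIs on the Supervisor Cluster. As a quick, hedged way to see what the DevOps persona has to work with after logging in, you can list the custom API groups; the exact resource list varies by release:

$ kubectl api-resources --api-group=vmoperator.vmware.com
$ kubectl api-resources --api-group=run.tanzu.vmware.com
# Expect to see resources such as virtualmachines, virtualmachineservices,
# virtualmachineclasses, and tanzukubernetesclusters listed here.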

Let's now look at how a DevOps persona can leverage these services and features to deploy a complicated application using simple declarative configuration files.

Scenario

We will use an elementary example that quickly becomes complicated once a few restrictions are enforced on how it must be deployed.

Let us use the standard Kubernetes WordPress deployment for our sample application. In this example, a MySQL pod is deployed (from its image on Docker Hub) using a persistent volume and exposed through a service. A WordPress pod (again using its image from Docker Hub) is deployed that accesses the MySQL DB service. The WordPress app is exposed through a LoadBalancer service on port 80, which the user can access.

Let us assume some common restrictions are imposed on the DevOps team.

  • The vSphere team does not want the DevOps team to exceed the resource quota in a shared infrastructure environment.
  • The DevOps team needs to manage the LCM of the Kubernetes environment securely.
  • The users are not allowed to download images from Docker Hub due to security restrictions.
  • The MySQL DB is quite large and/or needs substantial computing resources.
  • Containerizing the MySQL DB may not be an option due to these or some other security restrictions.

These are conventional restrictions and requirements that you would encounter in almost any IT organization, and they quickly complicate the application’s architectural patterns.

Now, the DevOps team is responsible for deploying their Kubernetes cluster. They need to provide an internal container registry. Their database needs to stay on a VM whose lifecycle needs to be managed securely. All these components need to reside securely within their boundaries without exceeding the allocated quota. Can all this be managed with simple YAML files that the team can operate in a GitOps way? The answer is yes.

Solution

Platform setup

The journey starts with the vSphere admin enabling Workload Management on a vSphere 7.0 U2a or later environment. This enables all the features discussed earlier on the cluster.

The vSphere admin also creates the required content libraries to make the VM images available for consumption by the DevOps users within their namespaces. Content creators provide these images through subscribed content libraries — e.g., VMware provides both Tanzu Kubernetes Cluster node images and VM images.

Next, the vSphere admin enables the Harbor registry service on the Supervisor Cluster. This feature is available if you are using NSX as the networking stack. (For a non-NSX networking stack, a similar solution can be enabled using the VM Operator.) As you can see from the image below, this process installs several pods on the ESXi servers and configures a Harbor endpoint accessible to the end-users for consumption.

The vSphere admin can also view these objects using the kubectl CLI as these are nothing but Kubernetes objects residing on the Supervisor Cluster.

$ kubectl vsphere login --server wcp.navlab.io --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
$ kubectl get svc -n vmware-system-registry-756328614
NAME                           TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)         AGE
harbor-756328614               LoadBalancer   10.96.0.109   192.168.10.163   443:31012/TCP   11m
harbor-756328614-harbor-core   ClusterIP      10.96.0.202   <none>           80/TCP          10m
...
$ kubectl get pods -n vmware-system-registry-756328614 -o wide
NAME                                           READY   STATUS    RESTARTS   AGE   IP            NODE               NOMINATED NODE   READINESS GATES
harbor-756328614-harbor-core-74dc84785-jhw54   1/1     Running   0          16m   10.244.0.24   watson.navlab.io   <none>           <none>
harbor-756328614-harbor-database-0             1/1     Running   0          16m   10.244.0.18   watson.navlab.io   <none>           <none>

Finally, the vSphere admin enables the self-service namespace feature on the Supervisor Cluster. With this option, the administrator can templatize the resources and access of future namespaces that each DevOps user can create and consume on the Supervisor Cluster.

That is all the upfront preparation that is needed. Now the infrastructure is handed over to the DevOps team to deploy their applications within this environment.

Application Deployment

The DevOps user can now log in to the Supervisor Cluster and start interacting with it to create objects using the Kubernetes API/CLI. The first step is to create a namespace where the application will reside. The namespace created inherits all the quotas, limits, and RBAC enforced in the template during the previous step. If the integrated Harbor registry (see above) is set up, creating the namespace automatically triggers the creation of a project with the associated RBAC within the Harbor environment.

$ kubectl vsphere login --server wcp.navlab.io --vsphere-username nverma@vsphere.local --insecure-skip-tls-verify
$ cat ns.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: demo1
$ kubectl create -f ns.yaml
namespace/demo1 created
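
Because the namespace is stamped out from the admin's template, it is worth confirming what was inherited before deploying anything. A couple of read-only checks can help (the exact quota objects present depend on what the vSphere admin configured):

$ kubectl describe ns demo1          # shows the resource quotas and limit ranges attached to the namespace
$ kubectl get resourcequota -n demo1 # lists the quota objects enforcing the vSphere admin's limits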

The next two steps are manual (and hopefully will be automated by the vSphere product team). The newly created namespace needs to have a set of VM classes and content libraries bound to it. This is currently possible through the vCenter interface.

$ kubectl get virtualmachineclassbinding -n demo1
NAME                 VIRTUALMACHINECLASS   AGE
best-effort-large    best-effort-large     4m56s
best-effort-medium   best-effort-medium    4m56s
best-effort-small    best-effort-small     4m56s
best-effort-xsmall   best-effort-xsmall    4m55s
$ kubectl get contentsourcebindings -n demo1
NAME                                   CONTENTSOURCE
9c54ca6c-fdf3-4ed8-b628-1a622641ebb7   9c54ca6c-fdf3-4ed8-b628-1a622641ebb7
9f4c1210-2c9b-46d2-81ad-f8f5139e4e74   9f4c1210-2c9b-46d2-81ad-f8f5139e4e74

The next step is to deploy the MySQL database, which will run on a CentOS VM. (At the time of writing, VMware provides a CentOS-based OVA for VM deployments using the VM Operator. It is delivered through the Cloud Marketplace, and documentation is available on VMware's website on how to consume the Marketplace image through content libraries.) We start by building a cloud-init.yaml file. Documentation on how to build a cloud-init file can be referenced here. A sample file for MySQL automation could look like this —

#cloud-config
chpasswd:
  list: |
    centos:password
  expire: false
groups:
  - docker
users:
  - default
  - name: centos
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz...m50YwPyUFoUAUOXaqM6J8sXJd1THHFXBd/9jmnI60abFj50hqNuk62cN9kHW55HSO/L/Llz/PZyuk0wTbfqzc8BRA3Z0YiLo+I/LIc0= nverma@bastion0
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, docker
    shell: /bin/bash
# Enable password based authentication if needed
# ssh_pwauth: True
network:
  version: 2
  ethernets:
    ens192:
      dhcp4: true
package_update: true
packages:
  - mysql-server
  - net-tools
runcmd:
  - systemctl enable mysqld
  - systemctl start mysqld
  - sudo mysql -e "CREATE DATABASE wordpress;"
  - sudo mysql -e "CREATE USER 'wordpress_user'@'%' IDENTIFIED BY 'password';"
  - sudo mysql -e "GRANT ALL ON wordpress.* TO 'wordpress_user'@'%'"
  - sudo mysql -e "FLUSH PRIVILEGES;"
  - sed -i '$abind-address=0.0.0.0' /etc/my.cnf.d/mysql-server.cnf
  - systemctl restart mysqld
  - firewall-offline-cmd --add-port=3306/tcp
  - firewall-cmd --reload

The base64-encoded value of the cloud-init.yaml file [cat cloud-init.yaml | base64 -w0; echo] is then added to the VM deployment configuration file. Details on how to build the specification file are available on VMware’s website.
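
For example, one possible way to generate the value and splice it into vm.yml in a single step (the sed pattern below assumes the placeholder text shown in the ConfigMap further down):

$ CLOUDINIT=$(base64 -w0 cloud-init.yaml)
$ sed -i "s|\[insert your base64 encoded cloud-init.yaml value here\]|${CLOUDINIT}|" vm.yml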

# vm.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  namespace: demo1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: nav-gold-policy
  volumeMode: Filesystem
---
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachine
metadata:
  labels:
    vm-selector: mysql-centosvm
  name: mysql-centosvm
  namespace: demo1
spec:
  imageName: centos-stream-8-vmservice-v1alpha1-1619529007339
  className: best-effort-small
  powerState: poweredOn
  storageClass: nav-gold-policy
  networkInterfaces:
    - networkType: nsx-t
      networkName: ""
  volumes:
    - name: my-centos-vol
      persistentVolumeClaim:
        claimName: mysql-pvc
  readinessProbe:
    tcpSocket:
      port: 22
  vmMetadata:
    configMapName: centos-cloudinit
    transport: OvfEnv
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: centos-cloudinit
  namespace: demo1
data:
  user-data: [insert your base64 encoded cloud-init.yaml value here]
  hostname: centos-mysql
---
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: mysql-vmservices
spec:
  ports:
    - name: ssh
      port: 22
      protocol: TCP
      targetPort: 22
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: 3306
  selector:
    vm-selector: mysql-centosvm
  type: LoadBalancer

The MySQL VM and its associated components are deployed. It takes around 3–4 minutes to deploy, power on, and execute the cloud-init process before the VM is ready for use.

$ kubectl apply -f vm.yml -n demo1                                               
persistentvolumeclaim/mysql-pvc created
virtualmachine.vmoperator.vmware.com/mysql-centosvm created
configmap/centos-cloudinit created
virtualmachineservice.vmoperator.vmware.com/mysql-vmservices created
$ kubectl get svc -n demo1
NAME               TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                       AGE
mysql-vmservices   LoadBalancer   10.96.0.147   192.168.10.162   22:31887/TCP,3306:30863/TCP   14m
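
Rather than guessing when those few minutes are up, the VirtualMachine object can be polled. A minimal sketch, assuming the status.powerState field of the v1alpha1 API and the Endpoints object created for the VM service:

$ kubectl get virtualmachine mysql-centosvm -n demo1 -o jsonpath='{.status.powerState}{"\n"}'   # expect poweredOn
$ kubectl get endpoints mysql-vmservices -n demo1   # the VM's address shows up here once the port-22 readiness probe passes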

Verification —
To verify that the MySQL VM was successfully created and is accessible, you can SSH in from the host whose RSA key was shared in cloud-init.yaml. Log in to the VM to validate that MySQL is listening on port 3306.

$ ssh centos@192.168.10.162
[centos@centos-mysql ~]$ netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address      Foreign Address    State
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN
tcp        0      0 0.0.0.0:3306       0.0.0.0:*          LISTEN
...
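
Since WordPress will later reach the database through the load balancer address (192.168.10.162), it is also worth confirming that port 3306 is reachable from outside the VM. Any TCP probe will do, for example:

$ nc -vz 192.168.10.162 3306   # output varies by netcat flavour; any "succeeded" or "open" message is good enough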

Awesome!! Now we need to deploy the WordPress application. Before we do that, we need to create a Kubernetes cluster. This is also a straightforward operation using Cluster API running on the Supervisor cluster.

# cluster.yaml
---
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: workload-vsphere-tkg1
  namespace: demo1
spec:
  distribution:
    version: v1.18.15
  topology:
    controlPlane:
      count: 1
      class: best-effort-medium
      storageClass: nav-gold-policy
      volumes:
        - name: etcd
          mountPath: /var/lib/etcd
          capacity:
            storage: 4Gi
    workers:
      count: 2
      class: best-effort-medium
      storageClass: nav-gold-policy
      volumes:
        - name: containerd
          mountPath: /var/lib/containerd
          capacity:
            storage: 30Gi
  settings:
    network:
      services:
        cidrBlocks: ["198.51.100.0/24"]
      pods:
        cidrBlocks: ["192.0.2.0/22"]
    storage:
      classes: ["nav-gold-policy"]
      defaultClass: nav-gold-policy

Use the above file to create the cluster. Depending on the number of nodes requested, it may take anywhere from 5–10 minutes for the cluster creation to be completed. For automation purposes, you could watch the status of the Tanzu Kubernetes Cluster (tkc) object to verify if the Kubernetes cluster is up and ready for consumption.

$ kubectl apply -f cluster1.yaml -n demo1
tanzukubernetescluster.run.tanzu.vmware.com/workload-vsphere-tkg1 created
$ kubectl get tkc -n demo1
NAME                    CONTROL PLANE   WORKER   DISTRIBUTION                      AGE   PHASE     TKR COMPATIBLE   UPDATES AVAILABLE
workload-vsphere-tkg1   1               2        v1.18.15+vmware.1-tkg.1.600e412   13h   running   True             [1.19.7+vmware.1-tkg.1.fc82c41]
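
For unattended automation, a crude polling loop is usually enough. A sketch that assumes the .status.phase field behind the PHASE column above:

until [ "$(kubectl get tkc workload-vsphere-tkg1 -n demo1 -o jsonpath='{.status.phase}')" = "running" ]; do
  echo "Waiting for workload-vsphere-tkg1 to reach the running phase..."
  sleep 30
done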

Once the cluster is ready, we are prepared to deploy the WordPress application to it. As stated in the prerequisites, we cannot pull the WordPress image directly from Docker Hub. We need to host the image in the private Harbor registry set up as vSphere pods running directly on the ESXi servers (see above). From a jump host that has access to the internet, download the WordPress image, tag it with the Harbor registry name, and then upload it to the registry as per the sample commands below —

$ docker login https://192.168.10.163
Username: nverma@vsphere.local
Password:
WARNING! Your password will be stored unencrypted in /home/nverma/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker pull wordpress:5.7-apache
5.7-apache: Pulling from library/wordpress
f7ec5a41d630: Pull complete
...
Status: Downloaded newer image for wordpress:5.7-apache
docker.io/library/wordpress:5.7-apache
$ docker images
REPOSITORY   TAG          IMAGE ID       CREATED      SIZE
wordpress    5.7-apache   7fda6c241024   4 days ago   551MB
...
$ docker tag 7fda6c241024 192.168.10.163/demo1/wordpress:5.7-apache
# Note: the IP address of the Harbor registry is the same as the one in the screenshot above.
# A project called demo1 was automatically created when the namespace was deployed. This is the project where we will be uploading the image.
$ docker push 192.168.10.163/demo1/wordpress
The push refers to repository [192.168.10.163/demo1/wordpress]
623e5ea375d9: Pushed
...
5.7-apache: digest: sha256:ff25d3a299dc7778cdc51793f899f4a5a745cc78a00632fb466f59d96cbf83b5 size: 4709

Now that the image has been uploaded to the local Harbor repository, we can use standard Kubernetes methods to deploy the WordPress application. First, we log in to the newly deployed Kubernetes cluster and switch the active kubeconfig context. A secret object containing the credentials for the private registry is also created.

$ kubectl vsphere login --server wcp.navlab.io --vsphere-username nverma@vsphere.local --insecure-skip-tls-verify --tanzu-kubernetes-cluster-name workload-vsphere-tkg1 --tanzu-kubernetes-cluster-namespace demo1
...
$ kubectl config use-context workload-vsphere-tkg1
Switched to context "workload-vsphere-tkg1".
$ kubectl create secret docker-registry regcred --docker-server=192.168.10.163 --docker-username=nverma@vsphere.local --docker-password="my vsphere.local password" --docker-email=nverma@vsphere.local

The sample YAML (referenced from the Kubernetes documentation and modified to meet the current requirements) that the DevOps team will use to deploy WordPress is shown below. The application is exposed through a LoadBalancer-type service on port 80. Note how the MySQL service (running on a VM in the previous step) is referenced with the MYSQLDB_SERVICE_HOST environment variable. The mysqldb service is configured as a selector-less service that points directly at the load balancer address of the VM exposed in the previous steps. One more note: if this is the first time connecting to the TKC, you may want to relax the pod security policy as per your requirements; this is documented on VMware's website.
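
A commonly used (and very permissive) example of such a binding is shown here. Treat it as a sketch and tighten it to your own policy requirements; it assumes the psp:vmware-system-privileged ClusterRole that ships with Tanzu Kubernetes clusters:

$ kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated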

# wordpress.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: 192.168.10.163/demo1/wordpress:5.7-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: "$(MYSQLDB_SERVICE_HOST)"
            - name: WORDPRESS_DB_PASSWORD
              value: "password"
            - name: WORDPRESS_DB_USER
              value: "wordpress_user"
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: mysqldb
spec:
  ports:
    - name: mysql
      port: 3306
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysqldb
subsets:
  - addresses:
      - ip: 192.168.10.162
    ports:
      - name: mysql
        port: 3306
        protocol: TCP

Once you apply the above YAML and let the application start up on the new cluster, the WordPress application will be accessible on port 80.

$ kubectl apply -f wordpress.yaml                     
service/wordpress created
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created
service/mysqldb created
endpoints/mysqldb created
$ kubectl get svc -A
NAMESPACE   NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
default     mysqldb     ClusterIP      198.51.100.47    <none>           3306/TCP       3m57s
default     wordpress   LoadBalancer   198.51.100.230   192.168.10.165   80:32379/TCP   3m57s
...

Accessing our application on port 80 within a web browser, we get our familiar WordPress admin page!!!
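
A quick command-line smoke test is possible too (a fresh WordPress install typically answers the first request with a redirect to its setup page; exact headers may vary):

$ curl -sI http://192.168.10.165/
# Look for a 302 redirect pointing at /wp-admin/install.php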

Conclusion

As discussed at the beginning of the article, using the new concepts and tighter integration of Kubernetes with vSphere, we can deliver complex workloads within the vSphere with Tanzu environment. The picture below shows what our new architecture looks like. Note that to keep this article brief, we did not introduce an additional integration point of a VM app (shown in gray). This could easily be achieved using concepts similar to how the database VM was deployed. The entire setup was contained within the demo1 namespace and hence adhered to all the constraints imposed on that namespace.

Since there is little to no prep work required by the Operations team in standing up this entire application stack, the power has been put directly into the hands of the developers. They can use standard CI/CD tools to automate such complex workloads on the vSphere platform using its native Kubernetes integrations.
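
As a rough illustration of that point, the entire stack from this article could be replayed from a pipeline with nothing more than the kubectl vSphere plugin and the manifests checked into Git. This is a sketch, not a hardened pipeline: it assumes the plugin's KUBECTL_VSPHERE_PASSWORD variable is injected as a secret, that the regcred secret from earlier already exists on the workload cluster, and that you wait for the TKC to be ready before the final step.

#!/usr/bin/env bash
set -euo pipefail

# Supervisor Cluster: namespace, database VM, and workload cluster
kubectl vsphere login --server wcp.navlab.io --vsphere-username nverma@vsphere.local --insecure-skip-tls-verify
kubectl apply -f ns.yaml
kubectl apply -f vm.yml -n demo1
kubectl apply -f cluster.yaml -n demo1
# (wait here for the TKC to reach the running phase — see the polling loop earlier)

# Workload cluster: deploy the WordPress application
kubectl vsphere login --server wcp.navlab.io --vsphere-username nverma@vsphere.local --insecure-skip-tls-verify \
  --tanzu-kubernetes-cluster-name workload-vsphere-tkg1 --tanzu-kubernetes-cluster-namespace demo1
kubectl config use-context workload-vsphere-tkg1
kubectl apply -f wordpress.yaml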

For reference, see the picture below for an annotated screenshot of the vCenter environment showcasing all the objects that are now available on the vCenter server.

Github link to the files referenced in this blog — https://github.com/papivot/the-new-modern-workload-on-vSphere
