vSphere with Tanzu’s latest update (v7.0 U2a) introduces some exciting features for using Kubernetes to manage the lifecycle of virtual machines. This has opened up numerous opportunities to modernize existing workloads. The author has covered some of these patterns in a previous article on the same technology. Today, we will dive into another example: deploying a container registry as a VM using the VM Operator. Harbor is an automatic choice here, as it is included as part of the Tanzu offering from VMware and is also one of the most robust and powerful container registries in the CNCF landscape.
We will not be diving into the details of how to consume the VM Operator, as that has been discussed in detail in various other blogs. We will directly delve into generating the manifests required for the DevOps persona to deliver and consume a stable registry within a vSphere environment.
The ask here is to deploy an enterprise-grade, highly available Harbor registry with TLS enabled and additional disk storage allocated for its data. The base image is CentOS Stream 8.
There are two manifest files of interest. The first one is the cloud-init YAML file that is required during the cloud-init stage of the VM deployment. The second file is the actual VM deployment YAML file that consumes the above cloud-init YAML file and deploys the VM using the VM Operator technology on vSphere. All relevant files for this blog are available here.
```yaml
#cloud-config
########## SECTION 1 #########
users:
  - name: centos
    ssh_authorized_keys:
      - ssh-rsa AAAAB3....Tbfqzc8BRA3Z0YiLo+I/LIc0= nverma@bastion0
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo, docker
    shell: /bin/bash
########## SECTION 2 #########
mounts:
  - [ /dev/sdb1, /data, "xfs", "defaults", "0", "2" ]
########## SECTION 3 #########
yum_repos:
  docker-ce-stable:   # repo id assumed; remaining repo fields (baseurl, gpgkey, etc.) elided
    name: Docker CE Stable - $basearch
    # ...
packages:
  # ... (Docker CE packages elided)
  - wget
########## SECTION 4 #########
write_files:
  - path: /harbor/tls.pem
    content: |
      # ... (certificate contents elided)
    permissions: '0444'
  - path: /harbor/tls.key
    content: |
      -----BEGIN PRIVATE KEY-----
      # ... (key contents elided)
      -----END PRIVATE KEY-----
    permissions: '0444'
  - path: /harbor/install-harbor.sh
    content: |
      /harbor/install.sh --with-trivy --with-notary
      if [ $? -ne 0 ]
      then
        echo "Failed. Trying again"
        /harbor/install.sh --with-trivy --with-notary
      fi
    permissions: '0755'
  - path: /harbor/harbor.yml
    content: |
      # ... (harbor.yml contents elided)
      # port: 80
      # external_url: https://reg.mydomain.com:8433
    permissions: '0644'
########## SECTION 5 #########
runcmd:
  - parted /dev/sdb mklabel gpt
  - parted /dev/sdb mkpart primary xfs 1MB 10240MB
  - /sbin/mkfs.xfs /dev/sdb1
  - mkdir -p /data
  - mount -t xfs --rw /dev/sdb1 /data
  - curl -L "https://github.com/docker/compose/releases/download/1.29.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose
  - ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
  - systemctl enable docker
  - systemctl start docker
  - wget -q https://github.com/goharbor/harbor/releases/download/v2.2.1/harbor-offline-installer-v2.2.1.tgz
  - tar --skip-old-files -xzvf harbor-offline-installer-v2.2.1.tgz
  - rm -f harbor-offline-installer-v2.2.1.tgz
  - sh /harbor/install-harbor.sh   # assumed final step: the text states the install script is called last
```
Let us look at the annotated sections in the YAML file. Users can, and in places must, modify these sections based on their requirements.
Section 1 — User configuration
Users can modify this section as per their requirements. SSH login to the VM is needed only for troubleshooting purposes, so for security reasons users should customize this section, adding or removing keys as their environment requires. Since Harbor requires Docker to run, the user who will log in for troubleshooting should be made a member of the docker group for seamless troubleshooting.
Section 2 — System configuration (network and disk)
Users can use this section to provide the networking and storage configurations. The networking configuration is pretty straightforward. The author attempted to use cloud-init's native features to configure the storage, but due to some base-image packaging issues those configurations did not work, so the author took a workaround: the mounts entry in this section updates the /etc/fstab file without actually mounting the filesystem to /data; the actual partitioning and mounting are performed in the final section.
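For reference, the mounts entry above translates into an /etc/fstab line equivalent to the following (a sketch; the device, mount point, and options are taken from the cloud-init file):

```
/dev/sdb1  /data  xfs  defaults  0  2
```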
Section 3 — Packages installation
Harbor requires Docker, containerd, and their prerequisite packages to be installed. This section sets up the relevant stable Docker CE yum repository so that cloud-init can download and install the appropriate packages. Net-tools is optional (needed only for troubleshooting).
Section 4 — Generating configuration files
Four files are created for consumption during the cloud-init stage.
- /harbor/tls.pem and /harbor/tls.key are the certificate and private-key files required to enable SSL and expose the application on port 443. In this example, the DNS records in the SSL certificate point to the hostname harbor-centos.navlab.io (referenced later in the /harbor/harbor.yml file).
(see the below excerpts from the Harbor website)
In production environments, always use HTTPS. If you enable Content Trust with Notary to properly sign all images, you must use HTTPS. To configure HTTPS, you must create SSL certificates. You can use certificates that are signed by a trusted third-party CA, or you can use self-signed certificates. This section describes how to use OpenSSL to create a CA, and how to use your CA to sign a server certificate and a client certificate. You can use other CA providers, for example Let’s Encrypt. The procedures below assume that your Harbor registry’s hostname is yourdomain.com, and that its DNS record points to the host on which you are running Harbor.
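As a quick sketch of the self-signed option, one way to generate a certificate and key pair for the hostname used in this example (harbor-centos.navlab.io) is with OpenSSL (1.1.1 or later for the -addext flag). This is illustrative only; production setups should use a trusted CA, and the file names here simply match those written by cloud-init:

```shell
# Generate a self-signed certificate and key for the Harbor hostname
# (illustrative; use a trusted CA in production)
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout tls.key -out tls.pem \
  -subj "/CN=harbor-centos.navlab.io" \
  -addext "subjectAltName=DNS:harbor-centos.navlab.io"

# Inspect the subject to confirm the hostname
openssl x509 -in tls.pem -noout -subject
```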
- /harbor/harbor.yml This is the actual configuration file used by the installation script to install Harbor. Users can find the details here. Note that it references the SSL certificate files created above. Once again, we need to highlight that the hostname value must match the DNS records in the SSL certificate. We also set the admin password to a non-default value. Another item to note is that the data_volume value points to the new filesystem created just for this purpose.
- /harbor/install-harbor.sh This is the install script that uses the above harbor.yml file and installs Harbor on this VM with Trivy and Notary enabled.
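Pulling the harbor.yml points above together, the relevant settings would look roughly like the following excerpt (a sketch, not the full file; the password is a placeholder):

```yaml
hostname: harbor-centos.navlab.io   # must match the DNS name in the SSL certificate

https:
  port: 443
  certificate: /harbor/tls.pem      # files written in Section 4
  private_key: /harbor/tls.key

harbor_admin_password: ChangeMe123  # placeholder; set to a non-default value

data_volume: /data                  # the filesystem created for Harbor's data
```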
Section 5 — Putting it all together
This section is where the magic happens. The first five commands partition the additional disk, create an xfs filesystem on it, and mount it at the /data mount point, where Harbor will store all its data. (Note that these may be removed if and when the cloud-init disk-creation issues are resolved.)
The following five commands download docker-compose (v1.29.1), install it with the correct permissions, and enable and start the Docker daemon.
Lastly, we download the Harbor offline installer, extract it to the same location where we created all the files in Section 4, and call the install script to perform the installation.
```yaml
# vm.yaml (surviving fragments; the surrounding manifest fields are elided)
    - networkType: nsx-t
    - name: my-centos-vol
    - name: ssh
    - name: harbor
    - name: notary
```
This file will deploy a VM in the Supervisor Cluster context in the demo1 namespace. Users can modify the namespace as per their requirements. We will discuss the relevant objects created here.
The YAML is used to request a 10GB PVC from the storage class that has been made available to the Supervisor Cluster within vSphere with Tanzu.
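A minimal sketch of such a PVC request follows; the claim name matches the volume fragment above, while the storage class name is an assumption and must be replaced with one assigned to your namespace:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-centos-vol
  namespace: demo1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                 # the 10GB data disk for Harbor
  storageClassName: my-storage-policy   # assumption: use your Supervisor Cluster storage class
```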
Users need to create a configmap from the cloud-init.yaml file that we discussed earlier. The user-data is a base64-encoded value of the cloud-init.yaml file. The user can generate the data by executing the following command: cat cloud-init.yaml|base64 -w0;echo. Copy and paste the base64-encoded value into the vm.yaml file. The vSphere hostname is also configured through the same configmap.
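A sketch of such a ConfigMap is shown below. The object name is hypothetical and must match the reference in vm.yaml, the user-data value is a placeholder for the base64 output generated above, and the hostname value is an assumption based on the certificate's DNS name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-vm-metadata          # hypothetical name; must match vm.yaml
  namespace: demo1
data:
  user-data: <base64-encoded cloud-init.yaml>   # output of: cat cloud-init.yaml | base64 -w0
  hostname: harbor-centos           # assumption: short form of harbor-centos.navlab.io
```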
Users can use the relevant VMClass to provide the required compute capacity to the Harbor VM. If this VM is deployed in an nsx-t based WCP environment, the networkType is nsx-t and the networkName needs to be left blank. If the networkType is vsphere-distributed, then the relevant workload networkName needs to be populated in the vm.yaml file.
Note that the previously requested PVC is mounted as a volume to this VM. This presents a raw disk to the VM, which then gets partitioned, formatted, and mounted through the cloud-init process.
Lastly, users expose the Harbor application ports 443, 4443 (Notary), and 22 (for troubleshooting purposes) for access through the service type LoadBalancer.
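A sketch of the corresponding VirtualMachineService follows; the object name and selector label are assumptions and must match the labels set on the VirtualMachine object in vm.yaml:

```yaml
apiVersion: vmoperator.vmware.com/v1alpha1
kind: VirtualMachineService
metadata:
  name: harbor-centos               # hypothetical name
  namespace: demo1
spec:
  type: LoadBalancer
  selector:
    app: harbor-centos              # assumption: must match the VirtualMachine's labels
  ports:
    - name: ssh                     # troubleshooting access
      port: 22
      protocol: TCP
      targetPort: 22
    - name: harbor                  # Harbor UI/API over TLS
      port: 443
      protocol: TCP
      targetPort: 443
    - name: notary                  # Notary signing service
      port: 4443
      protocol: TCP
      targetPort: 4443
```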
Once these files have been prepared, the user can either use kubectl or a CI/CD process to deploy the vm.yaml file within the Supervisor Cluster context. This will deploy a fully functional Harbor VM. Since a lot gets done during the cloud-init stage, it may take 4–5 minutes before the VM is fully operational. The user can grab the load balancer external IP address from the newly created VirtualMachineService. Once the DNS is modified to point the hostname of the Harbor VM (see the cloud-init.yaml section) at this IP address, we should be able to access the application seamlessly.