Deploying CloudBees Core on VMware Cloud PKS
--
This post is authored by Dan Illson with the assistance of Jeff Fry
CloudBees Core is a Continuous Integration and Continuous Deployment (CI/CD) engine based on the Jenkins open source automation server. This offering extends Jenkins by embedding best practices, rapid onboarding, and additional functionality to facilitate security and compliance controls. Organizations use CloudBees Core to provide a centrally-managed CI/CD service while maintaining a self-service experience for individual teams. More information about this offering is available here.
VMware Cloud PKS™ is an enterprise-grade Kubernetes-as-a-Service offering in the VMware Cloud Services portfolio that provides easy to use, secure, cost effective, and fully managed Kubernetes clusters. VMware Cloud PKS enables users to run containerized applications without the cost and complexity of implementing and operating Kubernetes. For more information about this service and to request access, please visit http://cloud.vmware.com/vmware-kubernetes-engine.
Note: VMware Cloud PKS was formerly known as VMware Kubernetes Engine. Certain items, such as the command line package, reflect the previous name.
Cluster Creation and Preparation
The first step in creating a CI/CD environment with these two offerings is to create and prepare the Kubernetes cluster. This is easily accomplished in a few short steps. In order to begin, however, access to the Cloud PKS service will be required. Access to the service can be requested here.
To begin the process, log in and obtain the necessary command line tools to interface with the Cloud PKS service and the Kubernetes cluster itself (assuming the tools haven’t been previously installed). These tools are:
- kubectl
- helm
- VKE command line package
Kubectl and the ‘vke’ command line package can be obtained within the Cloud PKS web interface. To get these utilities, log in to the Cloud PKS web UI and click on the ‘Developer center’ link in the vertical navigation bar at the left edge of the screen. Click on the ‘Downloads’ tab in the Developer center panel and download both the CLI and kubectl packages for the required operating system, as seen in Figure 1 below. Helm must be acquired separately. For detailed instructions on configuring Helm, please follow this link: https://docs.helm.sh/using_helm/#installing-helm.
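If Helm is not already present, one common route on macOS is Homebrew; the formula name below reflects the Helm 2 era and may differ on other platforms or newer releases, so the linked Helm documentation remains the authoritative reference:
brew install kubernetes-helm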
Once the required tools have been installed, the next step is to stand up a Kubernetes cluster. Creating a Kubernetes cluster under Cloud PKS can be done via the web UI or from the command line package. From the web, log in to the service and click on the link labeled ‘New Smart Cluster’. Once there, the user will need to select from a few options:
- Deployment type: Choose ‘Development Cluster’ to minimize spend
- Region: Select one from the available list
- Privileged Mode: Check this box as it will be required to perform container image builds within the cluster
- Name: Assign the cluster a descriptive name for your reference
This activity can be performed from the command line package as well. As a note, the remainder of the procedure described in this article will be performed via the command line. To do so, first log in with the following command:
vke account login -t <organization-id> -r <refresh-token>
The two values surrounded by angle brackets ‘<>’ are placeholders. To find these values, log in to the Cloud PKS web interface. From the landing page, click on the ‘Developer center’ link in the vertical navigation bar on the left edge of the screen, as shown in Figure 2 below. Once there, the organization ID will be visible in the vke account login command example on the Overview tab. In the image below, the org-id value is redacted by a rectangular bar.
To retrieve the necessary refresh-token value, follow the link labeled ‘Get Your Refresh Token’ just above the example command on the right side of the screen. A redacted example of the screen displaying API or refresh tokens is shown below in Figure 3. Then use the fully populated command to log in to the service via the command line.
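For illustration, a fully populated login command takes the following shape. Both values shown are fabricated placeholders rather than working credentials; substitute the organization ID and refresh token gathered above:
# example values only - replace with your own organization ID and refresh token
vke account login -t 0a1b2c3d-1111-2222-3333-444455556666 -r abcd1234efgh5678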
After logging in via the command line package, run the following command to create a new cluster:
vke cluster create --name <cluster name> --region <region> --privilegedMode
By default, a development cluster is created, so there’s no need to specify that option in this command. As with the UI-driven example, select a name and region according to preference, and enable ‘privileged mode’ with the --privilegedMode flag.
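As a concrete illustration, a create command might look like the following; the cluster name and region are hypothetical examples, so substitute a region available to your organization:
# hypothetical name and region - adjust to your environment
vke cluster create --name cicd-demo --region us-west-2 --privilegedMode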
Once the cluster has been created, use the following command to gain access to it via the kubectl utility:
vke cluster auth setup <cluster name>
Replace <cluster name> with the name chosen for the cluster when the vke cluster create command was run previously.
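To confirm that kubectl is now pointed at the new cluster, check the current context and list the worker nodes:
kubectl config current-context
kubectl get nodes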
Helm Configuration
To continue with the process, Helm, ‘The Package Manager for Kubernetes’, is required. The CloudBees Core components will be installed via a Helm chart. The instructions to install Helm on a variety of platforms can be found here. Once Helm has been installed and the vke cluster auth setup command from the previous section has been executed, run the following command to install Tiller, the cluster-side component of Helm:
helm init
This will likely trigger the addition of a worker node within the Kubernetes cluster, so it may take a few minutes before the Tiller pod is available and running. To monitor the progress of the Tiller pod, run the following command:
kubectl get pods -n kube-system -w
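Once the Tiller pod reports a Running status, running helm version should print both a client and a server version, which confirms that Helm can communicate with Tiller in the cluster:
helm version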
Installing CloudBees Core
The first step in installing the CloudBees Core components onto the prepared Kubernetes cluster is to build the Helm chart. To begin, clone this repository from GitHub: https://github.com/cloudbees/core-helm-vke. This repository was created by Jeff Fry, Senior Business Development Engineer at CloudBees. Once the repository is cloned, navigate to the base directory of the local copy of the repository and run the following command to build the Helm chart:
helm package ./CloudBeesCore
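As an optional sanity check, the chart source can be linted before packaging and the resulting archive listed afterward; the exact archive filename depends on the name and version declared in the chart’s Chart.yaml:
helm lint ./CloudBeesCore
ls *.tgz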
Once the Helm chart is built, two Kubernetes namespaces will need to be created: one (‘ingress-nginx’) will house the nginx ingress controller, and the other (‘cloudbees’) will house the CloudBees Core deployment. A clusterrolebinding object will also be necessary to ensure the correct permissions for the nginx ingress controller.
kubectl create namespace cloudbees
kubectl create namespace ingress-nginx
kubectl create clusterrolebinding nginx-ingress-cluster-rule --clusterrole=cluster-admin --serviceaccount=ingress-nginx:nginx-ingress
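To verify that these objects were created successfully, the namespaces and the cluster role binding can be listed before moving on:
kubectl get namespaces
kubectl get clusterrolebinding nginx-ingress-cluster-rule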
The next step is to install the ingress controller from its own stable Helm chart. Note that the controller.scope.namespace value has been set to match the Kubernetes namespace that will contain the CloudBees Core components:
helm install --namespace ingress-nginx --name nginx-ingress stable/nginx-ingress --version 0.23.0 --set rbac.create=true --set controller.service.externalTrafficPolicy=Local --set controller.scope.enabled=true --set controller.scope.namespace=cloudbees
It will take a few minutes following the installation of this Helm chart for the ingress-nginx service to resolve its external ‘Load Balancer Ingress’ hostname. The following command is used to check on the status of that value (labeled ‘Load Balancer Ingress’ in the output):
kubectl describe service nginx-ingress-controller -n ingress-nginx
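If only the hostname itself is needed, it can also be extracted directly with a JSONPath query; this prints an empty result until the load balancer has finished provisioning:
kubectl get service nginx-ingress-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'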
With the ingress controller in place, it’s now time to deploy the Helm chart containing the CloudBees Core components. The following command will install those pieces; it contains a placeholder for which an installation-specific value will need to be supplied:
helm install cloudbeescore --set cjocHost=<lb-ingress-hostname> --namespace cloudbees
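Alternatively, the hostname can be captured into a shell variable and substituted automatically rather than pasted by hand. This is a minimal sketch assuming a Bash-compatible shell; CJOC_HOST is simply an illustrative variable name, and the service and namespace names match those used earlier in this walkthrough:
# capture the load balancer hostname and pass it to the chart
CJOC_HOST=$(kubectl get service nginx-ingress-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
helm install cloudbeescore --set cjocHost=$CJOC_HOST --namespace cloudbees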
In this case, the namespace for the CloudBees Core deployment has been set to the namespace originally created on the Kubernetes cluster for this purpose. It is also the same namespace referenced as controller.scope.namespace during the installation of the nginx ingress controller Helm chart. The value which will need to be replaced for a specific installation is <lb-ingress-hostname>. Please replace this placeholder with the value of the ‘Load Balancer Ingress’ hostname from the output of the previous command. Once this command has been executed, the progress of the rollout can be monitored via this command:
kubectl rollout status sts cjoc --namespace cloudbees
Wait for output of this type (the ID after 'cjoc-' will change):
statefulset rolling update complete 1 pods at revision cjoc-59cc694b8b...
Once the ‘rolling update complete’ message is displayed, run this command to retrieve the initially generated admin password for the CloudBees Core instance:
kubectl exec cjoc-0 cat /var/jenkins_home/secrets/initialAdminPassword --namespace cloudbees
Save the value of this output, and navigate to the public URL of the CloudBees Jenkins Operations Center (CJOC): http://<lb-ingress-hostname>/cjoc. The <lb-ingress-hostname> placeholder will need to be replaced with the value from the kubectl describe service command utilized previously. Logging in as ‘admin’ with the password revealed by the previous command should kick off the setup wizard for CJOC. Unless a more permanent license has already been procured, a trial license should be requested via the form within the wizard to get started.
Conclusion
The process outlined above will bring up a brand new Kubernetes cluster via VMware Cloud PKS and install the necessary components for CloudBees Core within that environment. Subsequent posts will cover creating CI/CD pipelines within a Jenkins master and using a webhook to trigger a pipeline.
CI/CD with CloudBees Core on VMware Cloud PKS Series:
- Deploying CloudBees Core on VMware Cloud PKS
- Building a CloudBees Core Pipeline which Deploys to a VMware Cloud PKS Cluster
- Triggering a Jenkins Pipeline on a ‘git push’
This post was authored with the assistance of Jeff Fry, Senior Business Development Engineer at CloudBees.