A CI/CD pipeline using VSTS to deploy containerized applications to Kubernetes — Part 2
Part 1 of this series presented a high-level overview of how to implement a CI/CD pipeline using VSTS to deploy containerized applications to Kubernetes. In this post, I will walk you through the steps to get your first pipeline up and running.
Let’s get started!
1. Create a new project in VSTS
2. Import the code/get the repository ready
3. Create Kubernetes & Docker Registry service endpoints.
To create a Kubernetes service endpoint, go to Settings => Services
Click on New Service Endpoint, select Kubernetes and create one. Provide the Kubernetes cluster URL and paste in the contents of your kubeconfig file.
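If you need the kubeconfig contents to paste into this field, you can print them from a machine that already has access to the cluster. A minimal sketch (the file path is the default and may differ in your setup):
kubectl config view --raw
# or simply
cat ~/.kube/config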
To create a Docker Registry service endpoint, click on create new service endpoint, select Docker Registry & create one.
4. Create a build definition
Go to Build & Release => Builds => New build definition => Select repository => Continue
When asked to choose a template, select Empty process.
Under Phase 1 => Agent queue, select Hosted Linux Preview.
Next, we add all the steps under Phase 1 that will define the build definition.
So the first thing that we do here is build the Docker image from the Dockerfile.
For this we are going to add a Docker Task to Phase 1. To do that click on the ‘+’ icon and select the Docker task template from the list.
Choose Action: Build an image, specify the path to the Dockerfile, and provide a name for the image.
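Under the hood, this task is roughly equivalent to running docker build against the Dockerfile. A sketch, assuming we tag the image with the VSTS build ID as we do later in the pipeline:
docker build -f ./Dockerfile -t <dockerhub user>/<imagename>:$(Build.BuildId) .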
Next, add another Docker/Docker Compose task to run the application on the agent machine. This time the Action would be Run a Docker command; in the Command box, type:
run --name <container name> -itd -p 80:80 <dockerhub user>/<imagename>:$(Build.BuildId)
Next, add another Docker task to push the image to a Docker registry, Docker Hub in our case. This time the Action would be Push an image; choose Container Registry as the registry type, select the correct service endpoint, and specify the image name.
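For reference, the push step corresponds roughly to the following commands (the authentication is handled by the Docker Registry service endpoint, so this is only an illustration):
docker login -u <dockerhub user>
docker push <dockerhub user>/<imagename>:$(Build.BuildId)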
Next, add another Shell Exec (Execute shell) task to perform any local tests.
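As an example, a hypothetical smoke test against the container we just started could look like this (the endpoint and port depend entirely on your application):
# Fail the build if the application does not respond on port 80
curl -fsS http://localhost:80/ > /dev/null || { echo "Smoke test failed"; exit 1; }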
Next, install the helm and kubectl binaries on the agent machine. To do this, add a Helm tool installer task to Phase 1.
Then initialize helm by performing helm init. To do this, we use the Package and deploy Helm charts task. Set the Kubernetes connection type to Kubernetes Service Connection, select the appropriate service endpoint from the dropdown list, and under the Command field select init from the dropdown list.
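For reference, the task is effectively running helm init against the cluster selected by the service endpoint. If you want to check the result yourself (a sketch, not part of the pipeline):
helm init
kubectl get pods -n kube-system -l app=helm   # the tiller-deploy pod should be Running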
Next, we package our helm chart and push it to a private helm repository hosted on JFrog Artifactory. Again, choose the Package and deploy Helm charts task.
Execute the following command to push the helm package:
curl -u${HELM_USR}:${HELM_PSW} -T ${chart_name_path} "${HELM_REPO}/${chart_name}" || errorExit "Uploading helm chart failed"
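Note that the chart must be packaged into a .tgz before it can be uploaded, and the variables above are expected to be defined (for example as pipeline variables). A minimal sketch with placeholder names:
helm package ./<chart-directory>          # produces the .tgz referenced by ${chart_name_path}
# HELM_USR / HELM_PSW : Artifactory credentials
# HELM_REPO           : e.g. https://<artifactory-domain-name>/artifactory/<reponame>
# chart_name_path     : local path to the packaged .tgz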
This concludes the build definition. Don’t forget to save your changes!
Below is a screenshot of a typical helm repository (JFrog Artifactory).
Now, let’s move on to the release definition
5. Create a release definition
Click on Releases under the Build & Release tab and select New definition.
When prompted to choose a template, select Deploy to Kubernetes cluster. Then name your environment (Dev).
Then add artifacts by clicking on the Add Artifact button and selecting Artifact type: Build. Select the build definition that we just created and add the artifact.
Now let's configure our release definition. Under Phase 1 => Agent queue, select Hosted Linux Preview.
Let's configure the task added by the template to create the dev namespace in our Kubernetes cluster.
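The kubectl equivalent of this step is simply the following (use the same namespace name you reference in the helm tasks later):
kubectl create namespace dev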
Next, add two tasks under Phase 1: one to install the helm and kubectl binaries on the agent machine and one to initialize helm. These two tasks are the same as the ones we added in the build definition.
Next, add a Shell Exec task to import the helm repository from JFrog Artifactory. Execute the following commands:
helm repo add helm https://<artifactory-domain-name>/artifactory/<reponame>
helm repo update
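You can verify that the repository was imported correctly by searching it for your chart (illustrative; the chart name is a placeholder):
helm search helm/<chart-name>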
The execute shell step above can be skipped if you choose, but you will then have to specify the full path to the helm package when performing the deployment via helm in the next step. Now add a Package and deploy Helm charts task.
Specify the connection type as Kubernetes Service Connection and choose the appropriate service endpoint. Select the upgrade command from the dropdown, set the namespace to development, set the chart type to Name, and type in the path to the chart (the Artifactory URL, e.g. http://<domain-name>/artifactory/<repository-name>/<package-name>.tgz).
Fill in the Set Values box to override values.yaml (e.g. image.repository="<image-registry>/<image-name>",image.tag="latest").
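Put together, this task ends up running a command along the lines of the following (a sketch, assuming the repository alias helm added earlier and placeholder names):
helm upgrade --install <release-name> helm/<chart-name> --namespace development --set image.repository=<image-registry>/<image-name>,image.tag=latest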
Create two more environments, Staging & Production, with a similar configuration. Below is an example of how your release definition should look:
That completes the release definition!
Trigger the first build!
Now go to your build definition and trigger a build to see the entire CI/CD pipeline in action, deploying the application to your Kubernetes cluster!
Access your Kubernetes cluster to see the helm release:
helm ls
To see the service endpoints & deployments, execute:
kubectl get svc <service name> -n <namespace>
kubectl get deployments <service name> -n <namespace>
kubectl get pods <service name> -n <namespace>
The third and final part of this series answers the question: what's next? You will get a glimpse of what we are working on, which will help you simplify the creation of the entire pipeline using APIs.