A CI/CD pipeline to deploy containerized applications to Kubernetes using Bamboo — Part 2

Sugendh K Ganga
6 min read · Jul 31, 2018

The previous part of this article gave a high-level overview of how to implement a CI/CD pipeline to deploy containerized applications to Kubernetes. In this post, I will walk you through the steps to get your first pipeline up and running.

Prerequisites

1. A Bamboo server

2. A Kubernetes cluster (I created mine on Azure AKS)

3. An Ubuntu machine which will serve as a remote agent for the Bamboo server.

4. A containerized application ready for deployment to Kubernetes. We are going to use Helm for deployment. Click here to view the sample code.

5. A Docker Hub account, which will serve as the image registry. Private registries such as Nexus Repository Manager, AWS ECR or Azure Container Registry can be used if needed.

6. A private Helm repository to which Helm packages from the build will be pushed. We used a helm-local repo in JFrog Artifactory.

Promotion Model

Before we get into creating the build plan and deployment project, let’s consider the promotion model. Typically there is Dev -> Test -> Staging -> Prod, or something similar. In the case of Kubernetes, minikube or a local Docker host serves as the local development environment.

So what about Test, Staging and Prod? You could spin up separate clusters, but that could end up being expensive. Kubernetes namespaces are not a hard security boundary, but they are a perfectly good isolation boundary, so in this pipeline each environment gets its own namespace in the same cluster.

When you spin up a Kubernetes cluster, besides the kube-system and kube-public namespaces (which house the Kubernetes system pods), there is a “default” namespace. If you don’t specify otherwise, any services, deployments or pods you create will go into this namespace.

We don’t have to create the environment namespaces beforehand, because our pipeline will create them if they don’t already exist.
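For reference, creating a namespace on demand only takes one idempotent check; a deployment script can do something like the sketch below (the namespace name is just an example):

```
# Create the target namespace only if it doesn't exist yet
kubectl get namespace test >/dev/null 2>&1 || kubectl create namespace test
```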

Now that we have what we need, let’s get started!

Continuous Integration

Let’s create a build plan in Bamboo. We need to link a repository to the plan; I used GitHub. The next page will ask you where to run this build. Select ‘agent environment’ and click the Create button.

Clicking the ‘Configure plan’ button will take you to the plan configuration page. Click the default job to add tasks to it.

Now start adding tasks, the first of which is Source Code Checkout. This task clones the linked repository into the build plan’s working directory.

Next, compile your code by running a script task (shell).
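As a rough sketch, an inline script for the sample .NET Core application could look like this (assuming the dotnet CLI is available on the agent; adjust to whatever your application actually needs):

```
#!/bin/bash
set -e
# Restore NuGet packages and compile the application in Release mode
dotnet restore
dotnet build --configuration Release
```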

Next, add a Docker task to build the Docker image from the Dockerfile.
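The Docker task is configured through the Bamboo UI, but conceptually it runs something like the command below (the image name is an example; Bamboo exposes plan variables such as the build number as environment variables in script tasks):

```
# Build the image from the Dockerfile in the working directory and tag it with the build number
docker build -t mydockerhubuser/hello-world:${bamboo_buildNumber} .
```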

Next, add another Docker task to run the container and bring up the application.
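Again, the underlying command is roughly the following (container name and port are assumptions based on the sample app):

```
# Start the freshly built image in the background and expose the application port
docker run -d --name hello-world -p 8080:8080 mydockerhubuser/hello-world:${bamboo_buildNumber}
```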

Next, we can run tests, if any. Add another script task that calls a test script, or run it as an inline script like the previous script tasks.
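A minimal smoke test against the running container could be an inline script like this (the endpoint is a placeholder; substitute your real test suite):

```
# Fail the build if the application doesn't respond on its root endpoint
curl --fail --silent http://localhost:8080/ > /dev/null || exit 1
```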

Now, configure a Docker task to push the image to the Docker registry (Docker Hub).
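The push task boils down to the commands below (the credential variable names are examples; store them as secured Bamboo plan variables rather than hard-coding them):

```
# Authenticate against Docker Hub and push the tagged image
docker login -u "${bamboo_dockerhub_username}" -p "${bamboo_dockerhub_password}"
docker push mydockerhubuser/hello-world:${bamboo_buildNumber}
```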

Next, configure a task to update the Helm chart version. A shell script which does this can be found in the GitHub repository. Configure the task as below.
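The real script lives in the GitHub repository; as an approximation, it does something along these lines (chart path and version scheme are assumptions):

```
#!/bin/bash
set -e
CHART_DIR=helm/hello-world   # example path to the chart inside the repository
# Stamp the chart version and the image tag with the Bamboo build number
sed -i "s/^version:.*/version: 0.1.${bamboo_buildNumber}/" "${CHART_DIR}/Chart.yaml"
sed -i "s/^  tag:.*/  tag: \"${bamboo_buildNumber}\"/" "${CHART_DIR}/values.yaml"
```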

Next, configure another script task that packages the Helm chart so that it can be published to a private Helm repository.
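Packaging is a single command; a minimal sketch, assuming the example chart path used above:

```
# Produce a versioned archive (e.g. hello-world-0.1.<buildNumber>.tgz) in the working directory
helm package helm/hello-world
```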

Finally, configure another task to publish the Helm package to the private Helm repository (JFrog Artifactory).
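Publishing to a helm-local repository in Artifactory can be done with a plain HTTP upload; a sketch, assuming example credential variables and a placeholder Artifactory URL:

```
# Upload the packaged chart to the helm-local repository via Artifactory's REST API
curl --fail -u "${bamboo_artifactory_username}:${bamboo_artifactory_password}" \
     -T hello-world-0.1.${bamboo_buildNumber}.tgz \
     "https://my-artifactory.example.com/artifactory/helm-local/hello-world-0.1.${bamboo_buildNumber}.tgz"
```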

Also, create any plan variables you need under the plan configuration.

Continuous Delivery

Now, let’s create a Bamboo deployment project to manage deploying Helm releases to our Kubernetes cluster. Name it accordingly and link it to the build plan that we just created.

Now, let’s add our first environment to the newly created deployment project.

Next, set up the tasks in our newly created environment so that it deploys the application to the corresponding Kubernetes environment. Click the ‘Set up tasks’ button.

Configure a task as shown below to deploy the latest Helm release to the test environment. This is performed by a script in the GitHub repository.
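The deployment script in the GitHub repository handles this; as a rough sketch, the core of it is a helm upgrade against the target namespace (release name, chart name, repository URL and credential variables are examples):

```
#!/bin/bash
set -e
# Make the private chart repository known to helm and refresh the index
helm repo add helm-local "https://my-artifactory.example.com/artifactory/helm-local" \
     --username "${bamboo_artifactory_username}" --password "${bamboo_artifactory_password}"
helm repo update
# Install the release if it doesn't exist yet, otherwise upgrade it in place
helm upgrade --install hello-world-test helm-local/hello-world --namespace test
```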

Save it and proceed to add a trigger to deploy the release as soon as the build succeeds.

Finally, create the following variables to complete the test environment configuration.

Create two more environments (Staging and Production) with similar configurations. Choose a trigger to automatically deploy the release to staging on a successful deployment to the previous environment (test).

And when it comes to the production environment, let’s not create any triggers, so that deployments to production have to be done manually.

Deployment project configuration

Trigger your first build!

Now it’s time to finally trigger our first build.

Any new commit to the repository will trigger a build, or you can manually trigger the build plan.

On successful completion, the build will trigger a Helm release to the test environment, which in turn triggers a deployment to the staging environment.

You can then manually promote the build to the production environment.

Now let’s SSH into our agent machine and look at the deployed Helm releases.

Running ‘helm ls’ will give you the list of deployed Helm releases.
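To look at the resources behind each release, something like this works (namespace names follow the environments used above):

```
# List all deployed releases, then inspect the objects in each environment's namespace
helm ls
kubectl get all --namespace test
kubectl get all --namespace staging
kubectl get all --namespace production
```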

Deployed resources on test environment
Deployed resources on staging environment
Deployed resources on Production environment

The hello-world application can be accessed at the three external IP addresses (port 8080).

Sample .NET Core application
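To verify from the command line, you can look up each service’s external IP and hit it on port 8080 (the service name is an assumption):

```
# Grab the external IP of the hello-world service in the test namespace and call the app
EXTERNAL_IP=$(kubectl get service hello-world --namespace test \
              -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${EXTERNAL_IP}:8080"
```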

What Next?

Now that you have successfully deployed your application, how do you keep track of all your releases?

Don’t you think it’s important to monitor your Kubernetes cluster? What about intrusion prevention? What if the cluster runs out of resources? How important is cluster security?

All these questions will be answered in Part 3 of this article! Until next time, Keep learning!
