Going Great Guns with GoCD — Part 2

Prakash Shanbhag
6 min read · Aug 8, 2020


In the previous post, we covered how to watch for source code changes, build the docker image, verify it, and create/update the infra resources using terraform. In this part of the series, we will deploy the built image on the k8s cluster. Specifically, we will deploy our demo spring-boot service (myawesome-service-1) on the k8s cluster, connect it to the RDS instance we created via terraform, and make the service accessible on the internet via a load balancer.

GoCD Kube agent

To deploy the service on k8s, we first need a host from which we can connect to the k8s API. So we first create a GoCD agent image that has all of the prerequisites listed below installed:

  • kubectl CLI: The tool to control the k8s cluster. We can deploy/update objects on the cluster using this tool.
  • kubecfg: We will write the deployment manifests in jsonnet and use kubecfg to process them into the final YAML/JSON files that can be applied.
  • python: We combine all the deployment steps in a python script. (You could use bash or any other language too.)

In addition to the above software, we also need to copy all the config files that help kubectl identify the clusters. Typically this is the .kube folder with the cert files and the API server address. So we do the below steps to build the image for the GoCD Kube agent and push it to AWS ECR:

$ aws ecr create-repository --repository-name myawesome-gocd-agent-kube
$ git clone https://github.com/praks-1529/myawesome-gocd.git
$ cd myawesome-gocd/elastic_agents/base_images/kube
$ docker build -t myawesome-gocd-agent-kube:1.0 .
$ aws ecr get-login-password \
    --region ap-south-1 \
  | docker login \
    --username AWS \
    --password-stdin 479580041174.dkr.ecr.ap-south-1.amazonaws.com
$ docker tag <image_id> 479580041174.dkr.ecr.ap-south-1.amazonaws.com/myawesome-gocd-agent-kube:1.0
$ docker push 479580041174.dkr.ecr.ap-south-1.amazonaws.com/myawesome-gocd-agent-kube:1.0

Once the image is pushed, add the agent to the list of configured elastic agents: GoCD -> Admin -> Elastic Agent Configurations -> Add (this was discussed in detail in the previous post).

Additionally, before pushing the image you can test that it works by creating a container from it, getting into the container, and trying kubectl:

$ docker run -itd --name gocd-agent -e CI=true -e GO_SERVER_URL=https://dummy:8153/go --privileged myawesome-gocd-agent-kube:1.0
$ docker exec -it gocd-agent /bin/sh
$ kubectl get ns

Preparing for Kube deploy

k8s deployment is declarative in nature. What that means is that each k8s object needs a manifest file that encapsulates all the information on how the object must be deployed in the k8s cluster. This manifest file is generally in either JSON or YAML format.

Generating the YAML/JSON can be done in a naive way where every object to be deployed on k8s has a hand-written YAML/JSON file associated with it. But if there are a large number of services, we can instead build a framework that generates these YAML/JSON files from a few mandatory parameters. In this way, we avoid code duplication by templatizing the core generator, and developers with limited k8s knowledge can onboard their service. One popular approach is to use jsonnet, and many companies have built full-fledged frameworks around it for smooth k8s deployments. In our case, we will be using kubecfg, which is a thin wrapper over jsonnet. We will use kubecfg to convert the jsonnet templates into final manifests that are fed into kubectl.
We have written a small utility python script (deploy_kube) to do this for us. In addition, this script replaces the ENV variables with their actual values. These ENV variables are set from the GoCD pipeline and carry things like the image tag, namespace, etc. A rough sketch of the idea is shown below.
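
The actual deploy_kube script lives in the myawesome-gocd repo linked above; the snippet below is only a rough sketch of the idea it implements, with hypothetical file, placeholder, and variable names (templates/deployment.jsonnet, {{IMAGE_TAG}}, GO_DOCKER_IMAGE_TAG) used purely for illustration.

# Sketch only, not the real deploy_kube: render the jsonnet template with kubecfg,
# substitute a pipeline-provided ENV variable, and apply the result with kubectl.
$ export GO_DOCKER_IMAGE_TAG=1.0.42                                      # hypothetical; set by the GoCD pipeline
$ kubecfg show templates/deployment.jsonnet > /tmp/deployment.yaml
$ sed -i "s|{{IMAGE_TAG}}|$GO_DOCKER_IMAGE_TAG|g" /tmp/deployment.yaml   # simplified ENV substitution
$ kubectl apply -f /tmp/deployment.yaml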

Passing the variables from previous stages

Since this is the final stage of the pipeline, we need the resources created in the build and terraform stages. The below resources are needed:

  • Docker image: This is the image built as part of stage-0. If you remember, we had created an artifact file containing the image tag.
  • Terraform resources: We created an RDS instance and a security group. As part of the terraform stage, we output all the resource identifiers into a file which can be sourced to set the env variables.

Each of these files can be obtained in the final stage via a fetch task that runs prior to the actual command, as shown in the next section. These variables are then passed as environment variables to deploy_kube; the sketch below shows the rough shape of that hand-off.
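
As an illustration of that hand-off (the file and variable names below are hypothetical, not the ones used in the repo), the kube_deploy job roughly does the following with the fetched artifacts:

# Hypothetical artifacts fetched by the GoCD fetch tasks:
#   image_tag.txt          - tag of the docker image built in stage-0
#   terraform_outputs.env  - resource identifiers written out by the terraform stage
$ export IMAGE_TAG=$(cat image_tag.txt)
$ source terraform_outputs.env        # e.g. sets ELB_SG_ID and RDS_ENDPOINT
# deploy_kube then reads these ENV variables while rendering and applying the manifests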

Pipeline

Finally, we update the GoCD pipeline to put all the above pieces together. Here we add a new stage, kube_deploy, to our previous pipeline, which invokes the python utility script after setting the ENV variables. You can check the complete pipeline here.

Kube deploy stage

This stage consists of the below tasks:

  • Fetch the docker image tag to be deployed from the upstream build stage.
  • Create a namespace: We use the deploy_kube python utility to create a namespace where the service will be deployed. It is generally good practice to use namespaces, as they allow us to logically separate the cluster into multiple smaller parts.
  • Create a deployment: We use deploy_kube to create a deployment. As part of this deployment object, we specify the replica count and the docker image that needs to be deployed.
  • Create a service: We use deploy_kube to create a service. Since we need a load balancer to access the service, we use a LoadBalancer type of service, which spins up a classic AWS ELB that routes traffic to the pods. We can also attach extra security groups to the newly created ELB by setting the service.beta.kubernetes.io/aws-load-balancer-extra-security-groups annotation while creating the service (see the sketch after this list). The security group itself is created during the terraform_apply stage (tf file terraform/elb_sg.tf). In this security group, you can whitelist all the inbound IPs that are authorized to access the service. In our case, I used 0.0.0.0/0 to open traffic from all IPs for demo purposes, which is not advisable in production.
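
For reference, a hand-written equivalent of what the namespace and service steps produce would look roughly like the snippet below. This is a sketch under assumptions: the namespace name, labels, ports, and security group id are made up for illustration, and in the real pipeline these manifests are generated from the jsonnet templates rather than typed by hand.

$ kubectl create namespace myawesome                 # hypothetical namespace name
$ cat <<EOF | kubectl apply -n myawesome -f -
apiVersion: v1
kind: Service
metadata:
  name: myawesome-service-1
  annotations:
    # attach the ELB security group created by terraform (id is illustrative)
    service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  selector:
    app: myawesome-service-1      # must match the labels on the deployment's pods
  ports:
    - port: 80                    # port exposed on the ELB
      targetPort: 8080            # port the spring-boot container listens on
EOF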

If the pipeline runs successfully, you should see something like below.

Successful completion of Kube deploy stage

Also, if you check the status of the k8s cluster, you should see these k8s resources getting created.

k8s objects at a glance
k8s service object configuration
k8s deployment object configuration
k8s pod object configuration

Finally, it is time to check if the service works as expected.

Sample curl request
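
The screenshot above shows the actual request; if you want to reproduce the check from the command line, a rough sketch looks like the following (the endpoint path is hypothetical and depends on what your spring-boot service exposes, and the names match the illustrative sketches above):

# Look up the DNS name of the ELB that backs the service
$ ELB=$(kubectl get svc myawesome-service-1 -n myawesome \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Hit the service through the load balancer; the path here is hypothetical
$ curl -v "http://$ELB/actuator/health"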

A few things we would want to do differently in a production-level setup:

  • Security: Instead of having secrets like passwords as part of your pipeline code, we could fetch them from a secret vault at the deploy stage (see the sketch after this list).
  • Security: The k8s cluster itself is open to the internet. We could block it and open it only to certain office IPs.
  • The load balancers can be given a CNAME instead of the raw DNS name so that we can swap the LB behind the scenes.
  • The services can be of type NodePort if they only need to be reachable from within the cluster or VPC rather than from the internet.
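
For the first point, one possible approach (AWS Secrets Manager is used here purely as an example of a vault, and the secret name is hypothetical) is to resolve the secret at deploy time and hand it to deploy_kube via the environment instead of committing it to the pipeline repo:

# Fetch the DB password from a secrets store at deploy time (illustrative only)
$ export DB_PASSWORD=$(aws secretsmanager get-secret-value \
    --secret-id myawesome/rds/password \
    --query SecretString --output text)
# deploy_kube (or the manifest template) can then consume DB_PASSWORD from the environment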


Prakash Shanbhag

A software engineer by profession who likes to learn and share.