Setting up a CI/CD Pipeline with Vapor, Docker, Kubernetes, GitHub and CircleCI.

In the following sections you will set up a Vapor-based web project to be continuously tested and deployed via CircleCI onto a Kubernetes cluster. Be aware that this post will not cover securing your cluster or setting up a proxy such as an Ingress. It is solely about how you can set up a working test and deployment pipeline from the beginning. The deployed environment should therefore be treated as a staging environment, not as production.

Set up a Vapor Project

The foundation of our project is the web framework Vapor 3.0, which is written in Apple's programming language Swift. You will find the complete Getting Started guide and further documentation here. Please make sure that you have the necessary tools installed for macOS or Ubuntu.

Create a new project, build and run it with

vapor new <YourProjectName> --template=api --branch=beta
vapor build
vapor run

and then access your web server via http://localhost:8080. This template contains models which are stored in an SQLite database. In the non-release environment it is created as in-memory storage. To get it production-ready you have to make sure that the application is fed with the correct credentials and connection details for the database.

Google Cloud Platform / Kubernetes Setup

Setting up your own Kubernetes cluster is not a trivial task. For my quick setup I decided to use the Google Cloud Platform to run a Kubernetes Engine cluster. For creating it you need a Google account.

Note: Kubernetes Engine doesn’t come for free, so be aware that running a cluster will cost you money.

To create a Kubernetes Engine cluster, follow the steps outlined here, up to and including the “Creating a Kubernetes Engine cluster” part. Everything after that is not necessary for this setup.

Note: I do recommend using the local shell (and therefore installing the gcloud package), because it also installs the Kubernetes CLI (kubectl) and gcloud configures it to communicate with your cluster.

After completing the steps of the quickstart you are ready to deploy your code via Kubernetes.

Create the CircleCI Pipeline

I assume that you already have a GitHub and a CircleCI account set up, so you can use the following settings for the CI/CD pipeline.

The Vapor template already comes with a predefined circle.yml to build and test the application. For the sake of this post, just replace it with the following configuration.

version: 2

jobs:
  build:
    docker:
      - image: norionomura/swift:swift-4.1-branch
    steps:
      - checkout
      - run: apt-get update
      - run: apt-get install -yq libssl-dev pkg-config
      - run: swift build
      - run: swift test
workflows:
  version: 2
  tests:
    jobs:
      - build

It will pull a Docker image, install the necessary dependencies (libssl-dev and pkg-config are required by the new SwiftNIO), and then build and test the app.

Kubernetes works with Docker images and needs an image repository to pull them from. For this sample pipeline, just create an account on the official Docker Hub.

Create a Docker Image and upload it to the Docker Hub

In the next step we want to create a Docker image and push it to Docker Hub, or any other registry you want to use, e.g. Amazon ECR. First we need a Dockerfile to create the image. It consists of two stages:

  1. It builds a temporary image that does all the heavy lifting necessary to compile the actual project
  2. The production-ready image is then created from the compiled application on top of a minimal base image

# Build image
# Get the base image
FROM norionomura/swift:swift-4.1-branch as builder
# Install all necessary dependencies
RUN apt-get -qq update && apt-get -q -y install libssl-dev pkg-config
# Switch into the WORKDIR and copy it into the build image
WORKDIR /app
COPY . .
# Create a build folder to store the necessary data for the actual production image
RUN mkdir -p /build/lib && cp -R /usr/lib/swift/linux/*.so /build/lib
RUN swift build -c release && mv `swift build -c release --show-bin-path` /build/bin
# Production image
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 \
libssl-dev pkg-config \
&& rm -r /var/lib/apt/lists/*
WORKDIR /app
# COPY Config/ ./Config/
# COPY Resources/ ./Resources/ # if you have Resources
# COPY Public/ ./Public/ # if you have Public
COPY --from=builder /build/bin/Run .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 80
CMD ["./Run"]

In the next step the Docker image will be built by CircleCI and uploaded to the image repository. Add the following to the jobs: section of the circle.yml to add this functionality.

version: 2

jobs:
  build:
    docker:
      - image: norionomura/swift:swift-4.1-branch
    steps:
      - checkout
      - run: apt-get update
      - run: apt-get install -yq libssl-dev pkg-config
      - run: swift build
      - run: swift test
  push-to-docker-hub:
    docker:
      - image: docker:latest
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --update --no-cache curl jq python py-pip
      - run:
          name: Build Docker Image
          command: |
            docker build -t api .
            docker tag api <yourRepo>/<imageName>:latest
            docker tag api <yourRepo>/<imageName>:$CIRCLE_SHA1
            docker login -u $DOCKER_USER -p $DOCKER_PASS
            docker push <yourRepo>/<imageName>:latest
            docker push <yourRepo>/<imageName>:$CIRCLE_SHA1
      - persist_to_workspace:
          root: ./
          paths:
            - k8s-*.yml
workflows:
  version: 2
  tests:
    jobs:
      - build

It does the following:

  1. We use the docker:latest image so that we have everything needed to build our own Docker image.
  2. Because we can’t run Docker inside a Docker container, we use setup_remote_docker to delegate the build to a remote Docker engine.
  3. Install the missing dependencies.
  4. Build the actual Docker image, tag it, and push it to Docker Hub.

In the fourth step we rely heavily on CircleCI environment variables: the Docker Hub credentials are stored there, and we read the commit SHA from $CIRCLE_SHA1. The SHA is used to tag the Docker image. Please note that you need to provide your own repository and image name for the Docker image.

To activate the functionality we have to extend the workflows section as follows:

workflows:
  version: 2
  tests:
    jobs:
      - build
      - push-to-docker-hub:
          requires:
            - build
          context: dockerhub
          filters:
            branches:
              only: master

Please note that this step requires the build step to finish successfully and that it is only executed for new commits (or merged PRs) on the master branch. The context dockerhub refers to the CircleCI context in which DOCKER_USER and DOCKER_PASS are stored.

Deploy to your Google Cloud Kubernetes Engine

Now comes the tricky part. We need to execute three steps to deploy our image to our Kubernetes cluster:

  1. Create a Kubernetes Deployment file
  2. Create a Google Cloud service account key and add it to CircleCI
  3. Extend the CircleCI config to deploy the image to the Kubernetes cluster

Let us start with the deployment file (in this case called k8s-deployment.yml) and explain it step by step.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: <yourdeploymentname>
  labels:
    app: <yourdeploymentname>
spec:
  replicas: 2
  selector:
    matchLabels:
      app: <yourdeploymentname>
  template:
    metadata:
      labels:
        app: <yourdeploymentname>
    spec:
      containers:
        - name: <yourdeploymentname>
          image: <yourRepo>/<imageName>:$CIRCLE_SHA1
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP

In this case we create a Deployment based on the current beta apiVersion (this functions like a base configuration file) with a name and an app label (<yourdeploymentname> in the example above). Those are used for referencing in the spec: section. replicas defines how many instances of your application will be deployed. Note: to get a rolling update you need at least two replicas for a deployment. selector defines how the Deployment knows which pods to manage, in this case via the label we defined in the metadata. The actual pod content is defined by the spec: section within the template part. There we set the name of the container as well as the Docker image containing our application.
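
At the beginning we noted that the application needs to be fed the correct database credentials in production. One common way to do this is via environment variables on the container. The following is a minimal sketch, not part of the original manifest; the variable names and the Secret vapor-db-credentials are placeholders you would adapt to your own database setup:

# Sketch: environment variables for the container in the template's spec: section.
# The variable names and the Secret "vapor-db-credentials" are placeholders.
      containers:
        - name: <yourdeploymentname>
          image: <yourRepo>/<imageName>:$CIRCLE_SHA1
          env:
            - name: DATABASE_HOSTNAME
              value: <your database host>
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: vapor-db-credentials
                  key: password
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP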

Now the deployment configuration is ready, and as a next step we want to enable CircleCI to roll out the deployment to the Kubernetes cluster. To do so you have to create or enable a Google Cloud service account and add its key to CircleCI. The full tutorial can be found here. To get the necessary access key, go to your Google Cloud console, navigate to APIs & Services > Credentials > Create Credentials > Service Account Key and create a JSON key for the Compute Engine default service account. This key will be used to authenticate with the Google Cloud Platform. Please store it in CircleCI's environment variables as GOOGLE_KEY.

The last step consists of extending the circle.yml so that the uploaded image is rolled out to the Kubernetes cluster. Add the following job to the jobs: section:

deploy-to-staging:
  docker:
    - image: google/cloud-sdk:alpine
  steps:
    - attach_workspace:
        at: /tmp/workspace
    - run:
        name: Install dependencies
        command: apk add --update --no-cache libintl gettext openjdk7-jre
    - run:
        name: Install Kubectl
        command: gcloud components install app-engine-java kubectl
    - run:
        name: Template k8s config
        command: for i in /tmp/workspace/k8s-*.yml; do envsubst < "$i" > $(basename "$i"); done
    - run:
        name: Deploy to staging
        command: |
          echo "$GOOGLE_KEY" > key.json # Google Cloud service account key
          gcloud auth activate-service-account --key-file key.json
          gcloud config set compute/zone <zone>
          gcloud config set project <project name>
          gcloud container clusters get-credentials <cluster name>
          kubectl apply -f k8s-deployment.yml
          kubectl rollout status deployment/<metadata app name>

The Docker image for this job is as lean as possible, so we install the necessary tooling in the Install dependencies and Install Kubectl run blocks. As you may remember, there is a placeholder in the Kubernetes deployment file, $CIRCLE_SHA1, which is filled in by the Template k8s config step.
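
To make that templating step concrete: envsubst simply replaces the environment-variable reference in the manifest with its value. The commit SHA shown below is just an illustrative value, not one from the original post.

# In k8s-deployment.yml, before the Template k8s config step:
#   image: <yourRepo>/<imageName>:$CIRCLE_SHA1
# After envsubst has run, with CIRCLE_SHA1 set by CircleCI, it becomes e.g.:
#   image: <yourRepo>/<imageName>:4f2d9c7e1ab34c...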

The Deploy to staging block does exactly what its name says: it authenticates against Google Cloud with the GOOGLE_KEY mentioned above, points kubectl at your cluster, applies the deployment file and waits for the rollout to finish.

Now add this step to the workflow and you are good to go. In my example the Google key is saved in the k8s context.

workflows:
  version: 2
  tests:
    jobs:
      - build
      - push-to-docker-hub:
          requires:
            - build
          context: dockerhub
          filters:
            branches:
              only: master
      - deploy-to-staging:
          requires:
            - push-to-docker-hub
          context: k8s
          filters:
            branches:
              only: master

Expose the deployment to the internet

With the following kubectl command you create a Kubernetes Service (in this case of type LoadBalancer) which connects to your deployment.

kubectl expose deployment <yourdeploymentname> --type "LoadBalancer"

After a few minutes the load balancer will have an external IP attached, which you can look up via kubectl get service <yourdeploymentname>. You can then use that IP to access your deployment.
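
If you prefer to keep the Service definition in version control instead of creating it imperatively, a minimal sketch of such a manifest could look like this. The file name k8s-service.yml is an assumption, and the mapping from port 80 to the app's port 8080 is a choice you may adapt:

# Sketch: k8s-service.yml (assumed file name) for a LoadBalancer Service.
apiVersion: v1
kind: Service
metadata:
  name: <yourdeploymentname>
spec:
  type: LoadBalancer
  selector:
    app: <yourdeploymentname>   # must match the pod label from the Deployment
  ports:
    - name: http
      port: 80          # port exposed by the load balancer
      targetPort: 8080  # port the Vapor app listens on inside the container
      protocol: TCP

Because the name matches the k8s-*.yml pattern it would also be persisted to the workspace; you would just have to apply it with an additional kubectl apply -f k8s-service.yml in the Deploy to staging step.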

Conclusion

Now with every commit you make (depending on the branch) your code will be deployed directly to your Kubernetes cluster. Based on the number of replicas we also get a rolling update: only one pod is updated to the new version at a time, and only if this succeeds is the second one updated. In case of a failure the remaining pods keep serving the old version and you can roll the deployment back.
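
The exact behaviour of this rollout is controlled by the Deployment's update strategy. The snippet below is a sketch of how you could make it explicit in the spec: section of k8s-deployment.yml; the values shown are assumptions, not taken from the manifest above.

# Sketch: explicit rolling-update strategy in the Deployment's spec: section.
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # at most one pod may be unavailable during the update
      maxSurge: 1        # at most one extra pod may be created temporarily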