Continuous Delivery in Moodah POS

Nicolaus Christian Gozali
Published in Moodah POS · 9 min read · Oct 10, 2019

more productivity from automated deployment

Automation is always warmly welcomed in software development, and one place to start is the application's deployment. It enables the development team to achieve greater speed and reliability in delivering products. Here I will share the deployment pipeline used by my group while developing an ERP application called Moodah.

Introduction

There are many terms revolving around automated deployment to be familiar with; we will glance through some of them as necessary background.

CI and the two CDs

Continuous Delivery is an approach to releasing new changes to customers quickly and reliably. It builds on Continuous Integration and Continuous Deployment, or CI/CD for short.

CI is a set of practices in which developers merge their work into the main branch as often as possible. This usually involves validating, building and running automated tests against the application with the added changes. CD, on the other hand, is a method of delivering functionality to customers through automated deployment.

Docker

For front-end deployment we use Docker, a platform that lets developers run applications through containerization. Containerization is increasingly popular because containers are lightweight (they share the kernel of the host machine), can run in any environment and scale easily.

comparison of container (left) and virtual machine (right)

Docker builds an image from a Dockerfile, a file containing instructions on how to assemble and run an application. A Docker image includes everything needed to run the application: the code or binary, its dependencies, and the command to execute when it starts. A running instance of an image is called a container.

Why automate it?

There are several benefits to an automated deployment scheme for your app, including:

- Early bug detection: if there is an error in the local version of the code that has not been checked, a build failure occurs at an early stage, promptly notifying developers to make a fix.
- Less manual work: the effort to deploy is reduced since CI scripts automate testing, builds and deployment, so developers can focus on building features.

Our pipeline

Our application is composed of a React-based front end, deployed with Docker to servers provided to us, and a GraphQL Apollo server deployed to AWS Lambda. Both are hosted in a monorepo on GitLab.

The first component of our automation is GitLab's job runner, which executes the scripts written in the .gitlab-ci.yml file whenever a push to the repository triggers it. Here we run automated linting and tests so that errors are caught early and stop the changes from being deployed. Front-end and back-end deployment then take slightly different routes.

React’s Path

For our front-end app, there are two CI stages, as shown in the snippet of gitlab-ci.yml below.

Note: for brevity, the code snippets in this post show only enough to convey the idea.

simplified frontend gitlab-ci.yml jobs
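As a rough sketch of what such jobs can look like (the job names, paths and registry variable here are illustrative assumptions, not the project's exact configuration):

```yaml
stages:
  - test
  - release

test frontend:
  stage: test
  image: node:10.16.3-alpine
  script:
    - cd frontend
    - npm install
    - npm run test

release prod frontend:
  stage: release
  image: docker:19.03.0
  script:
    # Pull the previous image so its layers can be reused as a build cache
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest -t $CI_REGISTRY_IMAGE:latest ./frontend
    # Publish the freshly built image to the Docker registry
    - docker push $CI_REGISTRY_IMAGE:latest
```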

The first job tests our app and displays its coverage. If all tests pass, the runner pulls the most recent image as a cache to speed up building the current image, then uploads the result to a Docker registry, which is to Docker images what a Git hosting service like GitLab is to code.

Docker uses the following Dockerfile to tell it how to run our React application.

Dockerfile for frontend production environment

We will not go into details here, but the main takeaway is that Docker builds our React application into static files, then serves them with nginx when the image is run.
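As an illustration of that idea, a multi-stage Dockerfile of this shape (paths and versions here are assumptions, not the project's exact file) could look like:

```dockerfile
# Stage 1: build the React app into static files
FROM node:10.16.3-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the static files with nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The two-stage layout keeps the final image small: the Node toolchain is used only during the build and never ships to the server.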

The last step is to run the image on a server. In our case, we are provided with servers and Portainer, a GUI for building and running Docker containers, to ease deployment. We just need to add a new container from the image we pushed earlier in the CI script and run it.

List of running containers that act as the front end server in Portainer

And that's it: the app can already be accessed at https://itprojectkitwo-staging.cs.ui.ac.id.

Apollo’s Path

For our back-end app, we do not use Docker, since deploying a serverless application to AWS Lambda is just a matter of running a single command. Hence deployment starts and finishes in gitlab-ci.yml.

For the back end there are three stages: linting combined with testing, deploying the whole infrastructure when its configuration changes, and finally updating only the Lambda functions, which is faster and does not change the endpoint URL.

Below are the jobs for deploying to AWS Lambda and updating its functions.

simplified backend gitlab-ci.yml jobs
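A hedged sketch of what such jobs might look like with the Serverless Framework (the function name and exact flags are assumptions, not the project's actual configuration):

```yaml
deploy prod backend:
  stage: deploy backend
  image: node:10.16.3-alpine
  script:
    - cd backend
    - npm install
    # Deploy the whole stack: creates/updates the API Gateway endpoint and all functions
    - npx serverless deploy --stage prod

update prod backend functions:
  stage: release
  image: node:10.16.3-alpine
  script:
    - cd backend
    - npm install
    # Faster path: update only the function code, keeping the endpoint URL unchanged
    - npx serverless deploy function --function graphql --stage prod
```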

The above job will print output containing the endpoint.

Stack Outputs
...
ServiceEndpoint: https://o6w9uffa6d.execute-api.us-east-1.amazonaws.com/dev
...

Everything about .gitlab-ci.yml

Throughout the development of the project we made some slight adjustments to our CI script; its contents are discussed in more depth here.

GitLab is a Git repository hosting service that also provides a free CI service called GitLab CI. It can be viewed as a script that is executed every time a push happens; in general, it runs some tests and deploys the application. With this automation, developers can quickly detect errors from failing tests and, if everything is well, deploy the application without any intervention.

All the configuration goes inside a file called .gitlab-ci.yml at the root of the repository. Below is our group's simplified script; for brevity it includes only the jobs for the production environment, since the other environments are similar.

Moodah POS Gitlab CI Script for production environment/master branch

A YAML file consists of key-value pairs separated by colons, where a value can be a single value or a list, the latter indicated by dashes. Indentation is strict, as it indicates what a statement belongs to.

There are six statements with no indentation, namely: stages, test frontend, test backend, deploy prod backend, release prod frontend and update prod backend functions. The first is a reserved keyword defining the list of stages in the script, while the rest are called jobs. Jobs in the same stage run in parallel, while jobs in the next stage start only after the current stage finishes. In this case, there are five jobs to be done across three stages. The agent that executes jobs is called a Runner.
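To make that structure concrete, a skeleton of the script with the job bodies elided might look like:

```yaml
stages:
  - test
  - deploy backend
  - release

test frontend:
  stage: test
  # ...

test backend:
  stage: test
  # ...

deploy prod backend:
  stage: deploy backend
  # ...

release prod frontend:
  stage: release
  # ...

update prod backend functions:
  stage: release
  # ...
```

The two test jobs run in parallel; the deploy job waits for both, and the two release jobs wait for the deploy stage.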

Test Stage

The first stage is about testing and comprises jobs that test the front end and back end separately, reducing bugs and errors in the code by automatically executing all tests.

Since Moodah POS is a JavaScript application, Node is required to run the tests. That dependency is fulfilled by the chosen image, node:10.16.3-alpine; the number 10.16.3 is the Node version, and the suffix alpine indicates a lightweight Alpine Linux-based distribution of the Node image.

Both test jobs run similar commands, specified under script: . Each changes directory to its respective application, since our group uses a monorepo (multiple applications placed in different folders at the root of the repository), then installs the needed npm packages and runs the tests with the npm run test command.

Specific to the front-end test, npm run gql-gen is executed; this step is needed for front-end applications that communicate with a GraphQL server. The last key-value pair in both jobs is a regex that captures the coverage percentage from the job output, which GitLab displays as shown below.
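For a Jest-based project, the coverage key commonly takes a shape like the following (this pattern is a typical example, not necessarily the project's exact regex):

```yaml
test backend:
  stage: test
  image: node:10.16.3-alpine
  script:
    - cd backend
    - npm install
    - npm run test
  # Capture the "All files" percentage from Jest's coverage table
  coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'
```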

backend test job output

Deploy Backend Stage

The next stage is deploy backend, which consists of the job deploy prod backend to deploy the GraphQL JavaScript server to AWS Lambda with the help of the Serverless Framework.

The configuration for this job is similar to the test jobs, with the exception of some new key-value pairs. The variables key holds a list of key-value pairs that can be accessed within the job through the key name prefixed with a $ sign; in this case, $STAGE in script will be interpreted as prod. The environment key classifies jobs as shown below in the GitLab repository under the section Operations > Environments.

jobs classified into environments

The only key specifies the conditions under which the job is executed. In this case it is configured to run only on pushes to the master branch, and only if the push includes changes to the file backend/serverless.yml, specified through the refs and changes keys respectively. The reason is that a full deployment generates a new endpoint URL for the GraphQL server, a change that must also be mirrored in the front end. Hence this job runs only when absolutely necessary: when the base server configuration specified in serverless.yml changes.
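Putting these keys together, a sketch of how such a job might be configured (values are illustrative assumptions):

```yaml
deploy prod backend:
  stage: deploy backend
  image: node:10.16.3-alpine
  variables:
    STAGE: prod          # available inside script as $STAGE
  environment:
    name: production     # groups the job under Operations > Environments
  only:
    refs:
      - master           # run only on pushes to master...
    changes:
      - backend/serverless.yml   # ...and only when the base config changed
  script:
    - cd backend
    - npm install
    - npx serverless deploy --stage $STAGE
```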

Release Stage

The last stage comprises two jobs: release prod frontend, which deploys a Docker image to the Docker registry of the UI Computer Science Faculty so that their servers can run it to host the application, and update prod backend functions, which updates the Lambda functions while retaining the endpoint URL.

Since the front-end job deals with Docker, a different image, docker:19.03.0, is chosen to provide the dependency. In the early stages of development, there were some problems deploying images to the registry, such as the one below; the contents of the tags, services and variables keys are used to resolve the issue, as pointed out here. In short, the tags key selects which runners run the job, while the services key defines another Docker image that is run alongside and linked to the image specified in the image key.

AppArmor detection and --privileged mode might break.
...
error during connect: Post http://docker:2375/v1.40/auth: dial tcp: lookup docker on 152.118.24.4:53: no such host
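A common shape for this kind of fix is a Docker-in-Docker setup along these lines (the tag name and exact values here are illustrative assumptions, not the project's verified configuration):

```yaml
release prod frontend:
  stage: release
  image: docker:19.03.0
  services:
    - docker:19.03.0-dind   # companion daemon the docker CLI connects to
  variables:
    DOCKER_HOST: tcp://docker:2375   # point the CLI at the dind service
    DOCKER_TLS_CERTDIR: ""           # disable TLS so port 2375 is used
  tags:
    - docker                         # pick runners labeled for Docker jobs
```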

The before_script section fixes an improperly set proxy server on the UI Computer Science side. As the name implies, it is a set of commands executed before script.

The script for release prod frontend consists of three docker commands. The first pulls the latest image to serve as a cache, speeding up the second command, which builds a Docker image from the specifications in ./frontend/Dockerfile . We won't go into much detail on the contents of the Dockerfile, but in short it lists all necessary dependencies for our React application, compiles it and provides instructions on how to run it. The last command pushes the built image to the UI Computer Science registry. Notice there is a variable called $CI_REGISTRY_IMAGE that is not defined under the variables key; it is in fact an environment variable set in the GitLab repository under Settings > CI/CD > Environment Variables. This is a suitable place to store sensitive values such as deployment-related keys.

Moving on to the last job, update prod backend functions updates the Lambda functions of our group's GraphQL server. Its configuration is similar to deploy prod backend, except that the command used is serverless deploy function, which updates only the corresponding functions without changing the endpoint URL. The reason for the separate stage is that when the GraphQL server code is pushed for the first time, CI deploys the full stack first and starts the update only after it finishes.

That concludes the more in-depth explanation of the GitLab CI script our group uses.

Environments

Now, the details above describe just one pipeline out of the three environments adopted by our group, namely production, staging and development.

Thus the complete picture is: first, combine features and deploy to the development environment; then deploy to staging for review with clients; and finally deploy to production when everything is approved.

And… that is it for the deployment pipeline used by our group. Hopefully you are now more interested in and familiar with implementing CI/CD for your own project. Cheers 🎊!
