Continuous Integration and Deployment with Gitlab, Docker-compose, and DigitalOcean

My strong suit is not DevOps, but I challenged myself to set up automatic building, unit testing, and docker deployment to a server of my choice.

Continuous integration means that whenever you push a commit or open a merge request against certain branches, the CI service builds the project, runs the unit tests, and tells you whether the change is safe to merge or has a problem.

Continuous deployment takes that build and deploys it to a server for you automatically.

Most CI/CD solutions cost money, or offer free plans only for open-source projects. The fastest one I’ve heard of is Semaphore, but I went with Gitlab’s free built-in CI.

NOTE: This may not be the perfect setup, but good guides were hard to find, so I want to walk you through what I discovered.

Getting Started

  1. Host your git repo in Gitlab. It can be either private or public and still use the free CI. NOTE: Make sure its name is all lowercase, a mistake I wish I had not made.
  2. Create a .gitlab-ci.yml in the root of your project. This will trigger the CI engine.

In the gitlab-ci file you define stages of activity, such as compiling and building the project, running unit tests, building a docker image, and logging into your server remotely to pull down the image and rerun it.

For each stage you can list the branches that activate it, the docker image to work from, and the terminal scripts to run. After each stage, Gitlab tells you whether the process succeeded or threw an error.

Gitlab provides some default environment variables you can use in your scripts, such as the username and password for its private docker registry, the repo name and path, and build tokens. You can also set your own under the variables key.
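For example, a top-level variables key applies to every job, and Gitlab’s predefined variables can be read directly in any script. This fragment is just an illustration; the values are placeholders:

variables:
  NODE_ENV: "test"

build:
  script:
    - echo "Building $CI_PROJECT_PATH on branch $CI_BUILD_REF_NAME"

We’ll start with the build.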

Building Stage

Opening the file, we write these lines:

cache:
  key: "$CI_BUILD_REF_NAME node:8-alpine"
  paths:
    - node_modules/

stages:
  - build
  - test
  - release
  - deploy

build:
  stage: build
  image: node:8-alpine
  variables:
    NODE_ENV: "development"
  before_script:
    - apk add --update bash
    - apk add --update git && rm -rf /tmp/* /var/cache/apk/*
    - npm install
  script:
    - npm run build
  artifacts:
    paths:
      - server/
      - public/

The cache remembers the state of your code between stages and between CI runs, using a key. This means it won’t download the node_modules every time we push a commit. CI_BUILD_REF_NAME is our branch name. To invalidate the cache when you need to update the node_modules, just provide a different key string.
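For example, if you add a new npm dependency and want a fresh install, bump the key (the v2 suffix here is just an illustration):

cache:
  key: "$CI_BUILD_REF_NAME node:8-alpine v2"
  paths:
    - node_modules/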

Next we can list out the stages we plan on using in the order we want.

Finally we begin our build stage. You name the stage for UI purposes, choose which docker image to work from, and run your before_script, script, and after_script.

Artifacts are the results of a stage that you want to carry over into later stages. In this case I want the modified public and server folders to remain intact for my next few stages.

Testing Stage

In our next stage we set some environment variables and use a service. I’m using Feathersjs, and it looks for a DATABASE_TEST_URL set in the config. My unit tests won’t run without a database connection, so we inject the mongo service into our stage. Gitlab provides several services, especially for databases.

When you connect to their mongo service, use mongodb://mongo instead of mongodb://localhost:27017.
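If you need a specific mongo version, you can pin the service image tag instead of taking the default (3.4 here is just an example):

services:
  - mongo:3.4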

test:
  stage: test
  image: node:8-alpine
  variables:
    DATABASE_TEST_URL: "mongodb://mongo/dbname"
    NODE_ENV: "test"
  services:
    - mongo
  script:
    - npm run mocha

Release Stage

In this stage we want to take our compiled files and generate a docker image that we will store in our private Gitlab registry. This requires your project to have a Dockerfile to build from.
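If you don’t have one yet, here is a minimal sketch of a Dockerfile for a node app like this one; the paths, port, and start script are assumptions, so adjust them to your project:

FROM node:8-alpine
WORKDIR /usr/src/app

# install only production dependencies inside the image
COPY package.json ./
RUN npm install --production

# copy the folders the build stage saved as artifacts
COPY server/ server/
COPY public/ public/

EXPOSE 80
CMD ["npm", "start"]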

The “only” property defines which branches will trigger this stage.

release:
  stage: release
  image: docker:latest
  only:
    - "master"
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: "overlay"
  before_script:
    - docker version
    - docker info
    - "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
  script:
    - "docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull ."
    - "docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest"
  after_script:
    - "docker logout ${CI_REGISTRY}"

An important command is

"docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"

Here we are logging into the private Gitlab docker registry using environment variables Gitlab provides. Make sure to use double quotes (“”) instead of single quotes (''), if you want the environment variables to expand.

NOTE: The shell does not expand environment variables inside single quotes.
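A quick way to see the difference in a script block (illustrative echo commands):

script:
  - echo "Registry is $CI_REGISTRY"
  - echo 'Registry is $CI_REGISTRY'

The first line prints the registry address; the second prints the literal text $CI_REGISTRY.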

CI_REGISTRY points to registry.gitlab.com if you’re using the free version, and CI_PROJECT_PATH is your username and repo name. In this case

${CI_REGISTRY}/${CI_PROJECT_PATH}:latest = registry.gitlab.com/your_user/your_repo:latest

After we’ve built the image, we push it to our private registry.

Deployment Stage

The deployment stage can use either sshpass or plain ssh. First I will show you sshpass, then plain ssh. I recommend ssh over sshpass, because key-based login lets you keep password authentication disabled on the server, so you won’t get bombarded with login attempts from bots and malicious humans.

1. Using sshpass

In this stage we install sshpass so we can log into our remote server with just a password.

However, there is one issue unique to using docker-compose: we must send the docker-compose.yml file to the server, along with any files it depends on, such as our environment.env file.

So I create the environment.env file, passing in my Gitlab env variables. You can set these by going to your repo > Settings > Pipelines and scrolling to Secret Variables

or

https://gitlab.com/your_user/your_repo/settings/ci_cd

deploy:
  stage: deploy
  image: gitlab/dind:latest
  only:
    - "master"
  environment: production
  services:
    - docker:dind
  before_script:
    - apt-get update -y && apt-get install -y sshpass
  script:
    - printf "DATABASE_URL=${DATABASE_URL}\nPORT=80\n" > environment.env
    - sshpass -p "${DEPLOYMENT_SERVER_PASS}" scp -o StrictHostKeyChecking=no -o PreferredAuthentications=password -o PubkeyAuthentication=no ./environment.env ${DEPLOYMENT_SERVER_USER}@${DEPLOYMENT_SERVER_IP}:~/
    - sshpass -p "${DEPLOYMENT_SERVER_PASS}" scp -o StrictHostKeyChecking=no -o PreferredAuthentications=password -o PubkeyAuthentication=no ./docker-compose.autodeploy.yml ${DEPLOYMENT_SERVER_USER}@${DEPLOYMENT_SERVER_IP}:~/
    - sshpass -p "${DEPLOYMENT_SERVER_PASS}" ssh -o StrictHostKeyChecking=no -o PreferredAuthentications=password -o PubkeyAuthentication=no ${DEPLOYMENT_SERVER_USER}@${DEPLOYMENT_SERVER_IP} "echo ${DEPLOYMENT_SERVER_PASS} | sudo -S ls && docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}; sudo docker-compose -f docker-compose.autodeploy.yml stop; sudo docker-compose -f docker-compose.autodeploy.yml rm --force web; sudo docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest; sudo docker-compose -f docker-compose.autodeploy.yml up -d"

First we write our secret variables into environment.env with printf, then we use scp to copy the environment and docker-compose files to the server’s home directory.

> These options are necessary to skip host-key verification and force plain password login.

-o StrictHostKeyChecking=no -o PreferredAuthentications=password -o PubkeyAuthentication=no

When we finally ssh into the server, we use sudo non-interactively (in case we are not root) and log into our private Gitlab docker registry. Then we stop docker-compose and remove any existing web container. After pulling down the latest docker image from the registry, we launch docker-compose again.

a) DigitalOcean Server Steps for sshpass

  1. Create a DigitalOcean instance using the Docker one-click image.
  2. We need to install docker-compose on our DigitalOcean instance. ssh into the instance and run
sudo apt-get update
sudo apt-get -y install python-pip
sudo pip install docker-compose

3. Then edit sshd_config

nano /etc/ssh/sshd_config

4. At the bottom of the file change PasswordAuthentication to yes

PasswordAuthentication yes

5. Reload ssh to apply the changes.
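On the Ubuntu-based one-click droplet that is typically:

sudo service ssh reload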

b) Save environment variables in Gitlab

Now we must set our environment variables in the Gitlab CI Secret Variables Section

repo > Settings > Pipelines > Secret Variables

or

https://gitlab.com/your_user/your_repo/settings/ci_cd

DATABASE_URL=mongodb://db/dbname
DEPLOYMENT_SERVER_IP=your_ip_address
DEPLOYMENT_SERVER_PASS=your_user_password
DEPLOYMENT_SERVER_USER=your_server_user

I’m using DATABASE_URL since I’m using the mongo docker image in my compose file. The IP address is the one you got from DigitalOcean. The user and pass are the credentials you use to sign into the server. DigitalOcean recommends that you don’t log in as root but instead create a separate user and give it sudo privileges.

See DigitalOcean’s guide on how to create a sudo user.
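In short, on Ubuntu that is roughly the following (deploy is a placeholder username):

adduser deploy
usermod -aG sudo deploy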

Our server is now ready for continuous deployment.

2. Using ssh

Our deployment config is notably shorter. Here we are manually creating a private key file from the contents of the environment variable DEPLOY_SERVER_PRIVATE_KEY. Next we use ssh-keyscan to add our server’s host key to known_hosts so ssh won’t prompt us. Then we scp our files over and finally ssh into the instance to run our commands.

deploy:
  stage: deploy
  image: gitlab/dind:latest
  only:
    - "master"
  environment: production
  services:
    - docker:dind
  before_script:
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_SERVER_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H $DEPLOYMENT_SERVER_IP >> ~/.ssh/known_hosts
  script:
    - printf "DATABASE_URL=${DATABASE_URL}\nPORT=80\n" > environment.env
    - scp -r ./environment.env ./docker-compose.autodeploy.yml root@${DEPLOYMENT_SERVER_IP}:~/
    - ssh root@$DEPLOYMENT_SERVER_IP "docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}; docker-compose -f docker-compose.autodeploy.yml stop; docker-compose -f docker-compose.autodeploy.yml rm --force web; docker pull ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest; docker-compose -f docker-compose.autodeploy.yml up -d"

a) Server Steps for ssh

We use a different setup for ssh.

  1. Create a DigitalOcean instance using the Docker one-click image and include your computer’s ssh key.
  2. We need to install docker-compose on our DigitalOcean instance. ssh in as root and run
ssh root@your_ip
apt-get update
apt-get -y install python-pip
pip install docker-compose

3. Create the .ssh directory if it doesn’t already exist.

mkdir ~/.ssh
chmod 700 ~/.ssh

4. Create an ssh key pair. Leave all the prompts blank; don’t set a passphrase or an alternate file name.

ssh-keygen -t rsa

5. Display the public key using cat and manually copy it by highlighting the text with your mouse. Copy everything from `ssh-rsa` through the machine name (e.g. root@your-server-name). Save the public key into your authorized_keys file:

cat ~/.ssh/id_rsa.pub
nano ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
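Alternatively, you can skip the manual copy and append it in one command:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys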

6. Copy your private key’s contents and don’t share them with anyone. We’ll store this in a variable in our Gitlab CI settings.

cat ~/.ssh/id_rsa

b) Save our secret environment variables in Gitlab

repo > Settings > Pipelines > Secret Variables

or

https://gitlab.com/your_user/your_repo/settings/ci_cd

DATABASE_URL=mongodb://db/dbname
DEPLOYMENT_SERVER_IP=your_ip_address
DEPLOY_SERVER_PRIVATE_KEY=the_private_key_contents

Our server is now ready for continuous deployment.

Docker-Compose file

In our docker-compose file, the one we copy to our server, we need to reference the image in our private Gitlab registry.

db:
  image: mongo
  expose:
    - "27017"
    - "37017"
  command: --smallfiles
web:
  image: registry.gitlab.com/your_user/your_repo:latest
  ports:
    - "80:80"
  env_file:
    - environment.env
  links:
    - db:db

You made it! It takes a lot of work to set up, but hopefully I’ve saved you several hours of pain. Now whenever you push changes to the master branch, your DigitalOcean server will be updated with the results within a few minutes (assuming the build and unit tests pass).

You can see a working example of this CI setup in this repo:

codingfriend/Feathers-Vue