Continuous Integration: A Love Story

Creating a CI Environment with AWS, Docker, and GitLab CI

For the past couple of months I have been exploring and learning all about the wonderful world of continuous integration. I must say, it’s incredible. I may be adding my voice to a crowd of thousands of developers on this one, but it truly is something I believe most applications should be doing. Maybe it’s just my soft spot for creating and learning automated processes, but man, this stuff is cool. I don’t want this tutorial to be a verbose monster, so I will lay out the tasks that need to be completed, point you towards the documentation, and share the insights I picked up while doing this. If you have questions please feel free to comment and I will try my best to explain!

I’d like this post to be a starting framework to get whatever project you are working on into a continuous integration environment. Our final goal is to build a process that does the following:

  1. GitLab kicks off a build process in the background when it senses a code commit (2,000 free minutes of server usage at GitLab; as a student I approve of this)
  2. GitLab builds a Docker container
  3. GitLab deploys it to Amazon’s EC2 Container Registry
  4. GitLab deploys to an Elastic Beanstalk instance

Notice how the only thing we really had to do (after the initial setup of the CI environment) was commit our code. GitLab and AWS handle everything else. Within minutes your committed code can already be running on a live server. This is obviously a basic outline; from here you can expand it to only deploy after all unit tests pass, or hook it up to Amazon Simple Email Service to email developers with unit test reports. You could deploy to a development environment, spin up a couple of EC2 instances that run load tests, then deploy to your production environment (I really love the cloud). If you want to get fancy you could write an AWS Lambda function that hooks up to your Slack channel so developers can use a command like:

/deploy {COMMIT_HASH}

to deploy a specific commit to an environment. If there is any interest in a guide on any of the extra stuff above, I could look into producing one. For now, I want to keep this specific post a starting point for all your CI adventures.

Step 1: Containerization

I’m going to be producing this project in Node.js, but the only difference between this project and any other will be the Dockerfile that is produced. So if you want to do this for Java, Python, or whatever application you need deployed to a server, make sure you create the appropriate Dockerfile to build a proper Docker container. If you have no previous experience with Docker I highly suggest looking at at least the basic tutorial. Essentially, Docker allows our applications to run as if they were OS agnostic.

Dockerfile

This file builds our project into an image that can be run as a container on the server; for Node it’s relatively simple.

FROM node:boron

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json .
# For npm@5 or later, copy package-lock.json as well
# COPY package.json package-lock.json .

RUN npm install

# Bundle app source
COPY . .

EXPOSE 8080
CMD [ "npm", "start" ]

Now build your Docker image from within your project directory:

docker build . --tag ci-demo

You can tag it with whatever name you want, but eventually you will have to re-tag the image with the repository address of your container registry (I will be using Amazon’s). You can think of container repositories exactly like Git repositories, except they hold entire applications rather than just the code for them.

You can then run your Docker container with this command:

docker run -p 8080:8080 ci-demo

This will run your container. From here you can connect to your running app by opening a browser and going to the docker-machine address (or localhost on native Docker) with the correct port (8080 in this example).
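If you’re not sure what that address is, something like this should do the trick (assuming a docker-machine VM named "default"; on native Docker, localhost works directly):

# Find the docker-machine VM's address (Docker Toolbox setups):
docker-machine ip default

# Then check the app is up (native Docker users can hit localhost instead):
curl http://$(docker-machine ip default):8080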

Step 2: Creating our AWS Container Repository

This service (the EC2 Container Registry, or ECR) will house our images for a couple of cents a month if you wish to keep them there.

Hit that Create repository button!

Follow the steps (it takes like 2 seconds), and you’ll be given a link that looks like this, depending on the name you gave it:

 {YOUR_AWS_NUMBER_HERE}.dkr.ecr.us-west-2.amazonaws.com/ci-blog

It will also tell you how to re-tag your images so they can be pushed to this repository; you can do this now!
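For reference, the push roughly boils down to these commands (the repository address is the placeholder from above, so substitute your own):

# Log in to ECR with your AWS credentials (AWS CLI v1 style):
$(aws ecr get-login)

# Re-tag the local image with the repository address:
docker tag ci-demo {YOUR_AWS_NUMBER_HERE}.dkr.ecr.us-west-2.amazonaws.com/ci-blog

# Push the image up to ECR:
docker push {YOUR_AWS_NUMBER_HERE}.dkr.ecr.us-west-2.amazonaws.com/ci-blog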

Step 3: Creating an Amazon User

We need to create a user that has all the permissions required for this process, and nothing more (as is good security practice). Admittedly, I am super bad for just slamming a user with all the permissions so I can get things to just work, and then taking away the ones that are not needed. But you are not I, friends; you are better than I, so do this correctly the first time: create a user with no permissions to start and add them as you require them. You can find the method to create users here.

You will need to give your user a policy that allows it to push and pull from the repository, and you will also need to edit the policy of the ECR repository itself to allow that same user to push and pull. In other words, both ends of the transaction need to be given permission! Policy examples for the repository are here.

Note: You may also need to grant the user the permission to start image uploads to the repo:

 ecr:InitiateLayerUpload
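As a reference point, a minimal push/pull policy for the user could look something like the sketch below. The actions are the standard ECR ones, but the resource ARN is a placeholder for your own repository, so double-check it against the policy examples linked above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ecr:GetAuthorizationToken"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "arn:aws:ecr:us-west-2:{YOUR_AWS_NUMBER_HERE}:repository/ci-blog"
    }
  ]
}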

The IAM user and permission stuff was the most confusing part to me, but once you play around with it you will start getting why it’s there. I suggest just experimenting with these users and policies.

Step 4: GitLab CI

.gitlab-ci.yml

Please find my file here, and the documentation here.

Upon adding this file to your git project, GitLab knows that you want to use the CI environment they have set up. When you first start, you will most likely use their shared runners. Runners are the servers that build your application, and they are usually paired with some Docker image to easily bring in all the tools you need for the tasks you dictate in each stage (a stage is just a step). In my first stage I use the ekino/docker-buildbox:latest-dind-aws image, as it includes the AWS CLI and Docker to help me push my built Docker containers to the EC2 Container Registry. The script is essentially just a series of bash commands you want the runner/image to execute.

In the build stage I do the following:

$(aws ecr get-login)

This will use my user credentials from the system environment variables to log into the correct AWS registry, so you can properly push the Docker container. You can declare your environment variables within the GitLab console: go to Project Settings > Pipelines and scroll down to the Environment variables section.


The environment variables for AWS user configuration use specific names (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION); you can find them here. Fill these environment variables in with the credentials of the user you created.
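Since my actual file is only linked above, here is a rough sketch of the shape such a .gitlab-ci.yml could take. The stage names, the ci-demo tag, the repository address, and the ci-demo-env environment name are all placeholders of mine, and whether the buildbox image ships the EB CLI is an assumption (hence the explicit pip install):

stages:
  - build
  - deploy

build:
  stage: build
  image: ekino/docker-buildbox:latest-dind-aws
  services:
    - docker:dind   # Docker-in-Docker service, needed on GitLab's shared runners
  script:
    - $(aws ecr get-login)
    - docker build -t ci-demo .
    - docker tag ci-demo {YOUR_AWS_NUMBER_HERE}.dkr.ecr.us-west-2.amazonaws.com/ci-blog
    - docker push {YOUR_AWS_NUMBER_HERE}.dkr.ecr.us-west-2.amazonaws.com/ci-blog

deploy:
  stage: deploy
  image: ekino/docker-buildbox:latest-dind-aws
  script:
    - pip install awsebcli   # in case the image does not already ship the EB CLI
    - eb deploy ci-demo-env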

Step 5: Elastic Beanstalk

You can download the EB CLI by following the instructions here.

Once installed, type the following while in your project directory:

eb init

Follow the prompts, making sure to let it know you are using a Docker environment and to use the region you have been using throughout this entire tutorial. No need for an SSH connection (unless you really want one). Create a new application and name it whatever you’d like.

It should have created a directory called .elasticbeanstalk; inside you will find config.yml. You’ll notice that my file has a section about an artifact. Please copy this, as well as specifying the environment name of your choosing:

deploy:
  artifact: Dockerrun.aws.json
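Put together, the config.yml ends up shaped roughly like this (the application name, environment name, and region are examples from my setup; yours come from the eb init prompts):

branch-defaults:
  master:
    environment: ci-demo-env
deploy:
  artifact: Dockerrun.aws.json
global:
  application_name: ci-demo
  default_region: us-west-2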

Once this is all finished, open your AWS console and head to the AWS Elastic Beanstalk service. You will see the application “slot” there (if you are in the correct region).

Create a new environment within your application.

  1. Create a web server environment
  2. Use a predefined configuration: Multi-Container Docker (this is good practice, even if you are only using one container)
  3. Environment type: Single Instance (I like saving money when playing around)
  4. Give the environment the same name you used in the config.yml I referenced above.
  5. Now it will ask you what kind of VM you want to use. Since we are just playing around, use the defaults with a t1.micro VM.
  6. Use the sample application for now.

Once completed, within 5–8 minutes you should have a sample application running at the URL provided! Now to get your application into this Beanstalk instance.

Dockerrun.aws.json

My example can be found here.

This is the file that Elastic Beanstalk will use to define our Docker application. You can find the documentation for this file here. It’s essentially the only file we need to upload to our EB instance, since it contains the address of the repository of our image. This means EB needs authentication to grab that image; AWS has documentation specifically on this, so please follow it here. Please note that EB should create an S3 bucket (blob storage) for the environment, which is where you should store your dockercfg.json file (the file that holds the container authentication information). This in turn means you will need to give your user permission to read from that S3 bucket! Yay permissions!
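To make the shape concrete, a bare-bones multi-container Dockerrun.aws.json could look roughly like this. The image address, container name, bucket, key, memory, and ports are all placeholders; double-check the authentication block against the AWS docs linked above:

{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "bucket": "my-eb-bucket",
    "key": "dockercfg.json"
  },
  "containerDefinitions": [
    {
      "name": "ci-demo",
      "image": "{YOUR_AWS_NUMBER_HERE}.dkr.ecr.us-west-2.amazonaws.com/ci-blog",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ]
    }
  ]
}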


That’s it! Now, I could go into endless detail about each step, but the documentation that AWS provides is pretty verbose and great. Sometimes it can be a bit daunting to start something like this, so again, I want this post to be a framework that points you in the right direction to getting a CI environment started. With this setup, whenever you commit, GitLab will trigger a build, which builds a Docker container and deploys it to your ECR. From there it calls eb deploy, which deploys the Dockerrun.aws.json file (as defined in the config.yml file) that describes our Docker service. Elastic Beanstalk will then pull from the ECR and run the application using the configuration given to it by the Dockerrun.aws.json file.

Thank you for taking the time to read through this and I wish you all the best on your CI adventures.

Cheers!
