The Pipe Dream to Easy Deployment with Pipelines

“You may say I’m a dreamer but I’m not the only one”

- John Lennon

Imagine There’s No Deployment Struggles

Imagine, as a developer, you write your code, test it locally, and push it to the remote repository. From there, it is automatically tested, and if it passes all the tests it is deployed to the production server, all while you are notified of any failures in the process via Slack. Imagine you don’t have to worry about configuring your production server manually. Imagine it all just works, in production as it does locally.

This is not a dream intended to stay in the subconscious mind, it is a dream that requires action, knowledge, and the will to implement. On my own journey, I was plagued with projects into which I invested my heart and soul, only to end up banging my head against the Berlin Wall while trying to get my web application deployed and work out all the production bugs. Mid-head-bang, I looked around: it was not only I but all my fellow developers struggling with similar issues. Then I heard it calling… that voice inside my head. It said, “there is an easier way.”

Descending into the darkness of a Searx rabbit hole, I found DevOps emerging as the one true church of salvation. According to Atlassian:

DevOps is a set of practices that works to automate and integrate the processes between software development and IT teams, so they can build, test, and release software faster and more reliably.

DevOps builds off of Agile Development to encapsulate and automate every part of the process, from integration to continuous monitoring of the production server. These automation practices are known as CI/CD pipelines, or simply pipelines.

DevOps building off of Agile Development Pipelines, courtesy of source

There is an overflowing number of tools that allow one to create these pipelines. We will be trying out GitHub Actions, CircleCI, and TravisCI. In the future, I hope to expand this article to include Jenkins, Bamboo, and GitLab.

Imagination Turns into Planning

To start, I created a base project which could be cloned in order to set up three distinct pipelines, one with each of the tools outlined above. The project, This Is Not Financial Advice (TINFA), has a plain UI with no styling but contains all the necessary parts of a basic Node app, including CRUD routes and tests. Additionally, Docker is used to allow for easy deployment in an independent environment.

Take some time to familiarize yourself with the project repo. Now, we are ready to stop imagining and start actualizing.

The Actualization

I chose GitHub Actions, CircleCI, and TravisCI due to their popularity and ease of integration. The pipeline as I imagined it was straightforward.

On push to the development branch the pipeline would:

  1. Checkout the code
  2. Install the dependencies
  3. Test the code on four different major versions of Node (10, 12, 14, 15)

On push to the production branch the pipeline would start in the same way as above. Then, if the code passed the tests the pipeline would:

  1. Checkout the code
  2. Build the Docker image
  3. Push the image to Heroku

Additionally, I wanted to receive a Slack notification of any failure along the way or a success notification if the pipeline executed with no failures.

The Dockerfile

This is the Dockerfile we will be using to build the image.
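A minimal sketch of what such a Node Dockerfile typically looks like is below; the base image, port, and exact commands here are illustrative assumptions rather than the exact file from the repo.

# Minimal sketch of an eight-step Node Dockerfile (assumed, not the exact file from the repo)
FROM node:14-alpine            # 1. base image
WORKDIR /usr/src/app           # 2. working directory inside the container
COPY package*.json ./          # 3. copy dependency manifests first so this layer caches well
RUN npm ci --only=production   # 4. install dependencies
COPY . .                       # 5. copy the application source
ENV NODE_ENV=production        # 6. set the runtime environment
EXPOSE 3000                    # 7. document the listening port (assumed)
CMD ["npm", "start"]           # 8. start the app

Copying package*.json before the rest of the source is what lets Docker reuse the dependency layer between builds, which comes up again in the caching discussion later in this article.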

GitHub Actions

GitHub Actions, like most other pipeline tools, uses the YAML file format. GitHub Actions utilizes workflows.

A workflow is a configurable automated process made up of one or more jobs. You must create a YAML file to define your workflow configuration.

A workflow run is made up of one or more jobs. Jobs run in parallel by default.

A job contains a sequence of tasks called steps.

To create a workflow, create a folder at the root of your project repo, .github. In that newly created folder, create another folder, workflows. The names of these directories are important since GitHub specifically looks in .github/workflows/ to see if there are any workflows that should be executed.

We’ll start with the development branch by creating a workflow and giving it a name, Test. On a push to the branch, the test job gets triggered. The job runs on Ubuntu. We then define a matrix of Node versions with which to install and test the code. However, before getting to the actual testing, we check out the code. Then, we install the dependencies and run the tests. We have Slack notifications set up to alert us of any failure, or to let us know if everything succeeded.

Going a bit more in-depth, the Checkout and Use Node.js steps are predefined built-in actions. You can think of these like Math.pow() in JS. This is unlike the Install Dependencies step where we run our own command. The Slack notifications use voxmedia/github-action-slack-notify-build@v1. You can think of this as an external dependency like Express. By exploring the GitHub Marketplace you can find all kinds of actions to use in your workflows.

This Is Not Financial Advice GitHub Actions yaml file. Image my own.
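As a reference, here is a minimal sketch of what such a test workflow could look like; the secret names and action versions are assumptions, and the repo’s actual file may differ.

# .github/workflows/test.yml
name: Test

on:
  push:
    branches: [ development ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x, 12.x, 14.x, 15.x]
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
      - name: Install Dependencies
        run: npm ci
      - name: Test
        run: npm test
        env:
          MONGODB_URI: ${{ secrets.MONGODB_URI }}   # assumed secret name
      - name: Notify Slack on failure
        if: failure()
        uses: voxmedia/github-action-slack-notify-build@v1
        with:
          channel_id: ${{ secrets.SLACK_CHANNEL_ID }}
          status: FAILED
          color: danger
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
      - name: Notify Slack on success
        if: success()
        uses: voxmedia/github-action-slack-notify-build@v1
        with:
          channel_id: ${{ secrets.SLACK_CHANNEL_ID }}
          status: SUCCESS
          color: good
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}

Each matrix entry runs as its own job, so a failure on any single Node version produces its own Slack alert.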

For the production workflow, we specify that it triggers on a push to the production branch. An additional job, build, is created, which depends on test completing first; without that dependency, the two jobs would run in parallel. Once again we check out the code, however this time we use an action that someone else has predefined for us in order to build, push, and deploy to Heroku. Lastly, we have our Slack notifications.

This Is Not Financial Advice Github Actions yaml file for production workflow. Image my own.
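Again for reference, here is a rough sketch of the production workflow. The Heroku deploy action shown (akhileshns/heroku-deploy) is one possible community action, not necessarily the one used in the original file, and the secret names are assumptions.

# .github/workflows/production.yml
name: Test and Deploy

on:
  push:
    branches: [ production ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x, 12.x, 14.x, 15.x]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm test
        env:
          MONGODB_URI: ${{ secrets.MONGODB_URI }}
      - name: Notify Slack on failure
        if: failure()
        uses: voxmedia/github-action-slack-notify-build@v1
        with:
          channel_id: ${{ secrets.SLACK_CHANNEL_ID }}
          status: FAILED
          color: danger
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}

  build:
    needs: test                                 # without this, build would run in parallel with test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build, push, and release to Heroku
        uses: akhileshns/heroku-deploy@v3.12.12 # community action; an assumed choice, see note above
        with:
          heroku_email: ${{ secrets.HEROKU_EMAIL }}
          heroku_api_key: ${{ secrets.HEROKU_API_KEY }}
          heroku_app_name: ${{ secrets.HEROKU_APP_NAME }}
          usedocker: true                       # build and push the Dockerfile instead of a git deploy
      - name: Notify Slack on failure
        if: failure()
        uses: voxmedia/github-action-slack-notify-build@v1
        with:
          channel_id: ${{ secrets.SLACK_CHANNEL_ID }}
          status: FAILED
          color: danger
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}
      - name: Notify Slack on success
        if: success()
        uses: voxmedia/github-action-slack-notify-build@v1
        with:
          channel_id: ${{ secrets.SLACK_CHANNEL_ID }}
          status: SUCCESS
          color: good
        env:
          SLACK_BOT_TOKEN: ${{ secrets.SLACK_BOT_TOKEN }}

The needs: test line is what serializes the two jobs; everything else in the test job mirrors the development workflow above.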

What about environment variables?

Environment variables are an essential part of any web app. In your GitHub repo, navigate to the Settings tab, then on the sidebar click Secrets. Near the top right, the New repository secret button can be found. After entering a name and value, you will be redirected to the secrets page with your new secret in the list. As you saw above in the code snippets, we can access it by using the syntax ${{ secrets.NAME_OF_SECRET }} and the secret will be dynamically injected at runtime.

How Do We Setup Heroku?

Setting up Heroku is fairly straightforward once you look at the secret keys we need: your Heroku email, API key, and app name. I’ll assume you know your email. To get the app name, you’ll first need to log in and create an app. To get the API key, go to your Account Settings by clicking the ninja profile icon in the top-right. Scroll down to the API Key section and select Reveal. Now, save this key in your secrets along with your email and app name.

Going back to the dashboard, click New and in the dropdown click Create new app. Type in the app name, tinfa-gh-actions, and click Create app. Once the app has been created, click on it. Near the top, on the navigation bar, click Settings, then scroll down to Config Vars and in that section click Reveal Config Vars. This is where we will enter any secret variables needed for production. We will add MONGODB_URI as the key and the URI as the value.

And Slack Notifications?

The Slack notification took some effort to get set up. First, make sure you are signed in to Slack, then navigate to https://api.slack.com/ and click on Create Custom App. Next, click the green button with the text Create New App. When the popup appears, select From Scratch. I named the app TINFA-GH-Actions and selected the appropriate workspace.

Once the app is created, we need to set permissions to allow the Slackbot to send notifications. On the left side navbar in the Features section, click on OAuth & Permissions. Scroll down to Scopes and under Bot Token Scopes click Add an OAuth Scope. The three scopes we need are channels:read, chat:write, and groups:read.

If you would like to customize the appearance of your bot navigate to the Basic Information page using the sidebar. Scroll down to Display Information. Here you can change the app name, add a description, change the color, and add an app icon.

The next step is to install the bot to the workspace. Navigate to Install App. You will see your Bot User OAuth Token and, underneath, a button, Install to Workspace. Click that, and now we just need to get the channel id. Open up the Slack desktop app and add the new app you just created. Once it is installed, right-click the channel, select Copy Link, and paste it somewhere; it could be the text box in that Slackbot channel. The string of alphanumeric characters after /archives/ is the channel id. Add it to your GitHub secrets so that it can be used in the notification step as ${{ secrets.SLACK_CHANNEL_ID }}. A success notification will look like the image below.

Give it a run and see what happens. You can check out the repo and see the deployed app.

CircleCI

Another CI/CD tool is CircleCI. Before proceeding, sign up for CircleCI and connect your GitHub account. If you run into trouble with this part, check out the CircleCI docs.

Much like GH Actions has predefined actions which we can use, CircleCI has Orbs. We will use the Node, Slack, and Heroku orbs. You can find which Orbs are available and how to use them here.

Orbs are reusable snippets of code that help automate repeated processes, speed up project setup, and make it easy to integrate with third-party tools.

Before getting into our yml file, we need to create the directory .circleci at the root of the repo and, inside it, a file, config.yml. CircleCI specifically looks in the .circleci directory for config.yml.

At the top of our yml file we set the version to be 2.1. This is important since orbs, parameters, and executors all require version 2.1. Next, we define our executor. This just means the environment in which the steps of a job will be run. We can choose to run it on machine, macos, windows, or docker. We will choose docker.

After defining the orbs, we can move on to defining the workflows. CircleCI is similar to GH Actions in that it has workflows, which contain jobs, which contain steps. The workflow runs on version 2 with the name test-build-and-deploy. We will have two jobs, test and build-deploy. test sets a context, Slack, which will enable us to send notifications. Next, we filter the job so that it only runs on the development and production branches. Then, we set up a matrix of different Node versions, all on the Linux OS.

Diving into the test job, we set the parameters. The executor will run on the matrix we defined in the workflow section. The steps consist of checking out the code, installing a specific Node version, installing the dependencies, testing the code, and then sending a Slack notification on any failure.

Back in our workflows we can define our next job, build-deploy. It requires test to run first. We set the context to Slack and filter the branch to production only.

In the build-deploy job we initialize Docker since we will need it to build our image later. In the steps we check out the code, set up Docker, install Heroku, build and push the image to Heroku, then release the image. Finally, we notify on success or failure.
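Putting those pieces together, here is a rough sketch of such a config.yml; the orb versions, image tags, matrix values, and app/credential handling are assumptions and may differ from the original file.

# .circleci/config.yml
version: 2.1

orbs:
  node: circleci/node@4.7
  heroku: circleci/heroku@1.2
  slack: circleci/slack@4.4

jobs:
  test:
    parameters:
      node-version:
        type: string
    executor: node/default            # docker executor provided by the node orb
    steps:
      - checkout
      - node/install:
          node-version: << parameters.node-version >>
      - node/install-packages         # installs dependencies with caching
      - run: npm test
      - slack/notify:                 # needs the Slack token supplied by the Slack context
          event: fail
          template: basic_fail_1

  build-deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - setup_remote_docker           # provides a Docker engine for the image build
      - heroku/install
      - heroku/push-docker-image      # expects Heroku credentials (API key, app name) as project env vars
      - heroku/release-docker-image
      - slack/notify:
          event: fail
          template: basic_fail_1
      - slack/notify:
          event: pass
          template: success_tagged_deploy_1

workflows:
  version: 2
  test-build-and-deploy:
    jobs:
      - test:
          context: Slack
          filters:
            branches:
              only: [development, production]
          matrix:
            parameters:
              node-version: ["10.24.1", "12.22.1", "14.17.0", "15.14.0"]
      - build-deploy:
          context: Slack
          requires:
            - test
          filters:
            branches:
              only: production

The requires clause on build-deploy is what forces the deploy to wait for every matrix job of test to pass.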

CircleCI gives us a nice way of visualizing the flow of the pipeline as seen below. All of our tests run in parallel. After all the tests succeed, we then move on to build and deploy.

How do environment variables work in CircleCI?

After logging in and ensuring your project is selected, click on Project Settings. On the left side, click on Environment Variables. Once we add the environment variables, we can use them in the yml file by prefixing the key with PROJECT_. If I want to access my MONGODB_URI, I can do so with PROJECT_MONGODB_URI. We have to prefix it since these are different variables than those defined in a context, or within a step or job.

Notice how PROJECT_MONGODB_URI is nowhere to be found in the yml file. This is because it acts like an exported variable in the terminal. When running the tests or building the image, the code will automatically pick up the variable since it is accessible to the whole project. This is in direct opposition to GH Actions, where we have to explicitly state which environment variables are available at a given step.

Notifications

Setting up a Slackbot will be similar to GH Actions. Please see that section for more details. The slight change we will make is in OAuth & Permissions. We will add OAuth Scopes for chat:write, chat:write.public, and files:write.

We can set a channel, but if we omit it, the default channel will be the Slackbot App channel. All we need to do is add the API key, set it as an environment variable in the project, and define the API key in the yml file. A default success notification will look like the image below. You can read the Slack orb documentation for direction on defining custom notifications.

To setup on Heroku refer to the GitHub Actions section. Here are the links to the GitHub repo and the deployed app.

TravisCI

To get started I connected my GitHub account to TravisCI and gave it OAuth permission. TravisCI looks in the .travis.yml file for the build configuration. I would recommend checking out the documentation here for getting started and here for the CLI.

First we define os: linux and the Linux (read: Ubuntu) distribution focal, which is just the release name of Ubuntu 20.04. Next, we define the language as node and set a matrix of versions. We set docker as a service so that we have access to it in the pipeline.

Notifications

Notifications with TravisCI are different from how we configured them for GH Actions and CircleCI. Configuring Slack notifications takes some work, but I was able to follow the documentation. The notifications won’t appear in a channel that is configured in the yml file like GH Actions or CircleCI. Instead, after signing in to Slack, you will come to a page where you configure which channel to post to.

You will want to make sure you encrypt the account:token pair with the CLI command travis encrypt "<account>:<token>" --add notifications.slack --pro. Notice how we didn’t specify a room/channel. This is because TravisCI will automatically send the notifications to you if there is no room/channel specified. The --pro flag is necessary to use travis-ci.com as the API endpoint. The default endpoint is travis-ci.org, which TravisCI has stated they are moving away from.

A success notification will look like the image below.

Build Stages

For the script we run npm test. The Node version matrix we defined before will be used to run the script on each version. After the script, we define jobs, which contains our deploy stage. We choose to skip the script; this line is important since we don’t want to rerun any tests. The separation of the script and the jobs is what allows the tests to run four times (once for each Node version) but the deploy to run only once. This is what TravisCI calls “build stages”. You can read more about build stages in the documentation.

In the deploy job, we want to deploy to Heroku. We can use the CLI to encrypt the api_key with the command travis encrypt $(heroku auth:token) --add deploy.api_key --pro. We choose to only deploy on the production branch, yet the test matrix will run on all branches. Lastly, we skip any cleanup. This allows for a slightly optimized pipeline by preventing Travis from resetting the working directory.
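Putting the pieces together, here is a rough sketch of what the .travis.yml could look like; the encrypted strings are placeholders written by the travis encrypt commands above, and the app name is an assumption.

# .travis.yml
os: linux
dist: focal
language: node_js
node_js:
  - 10
  - 12
  - 14
  - 15

services:
  - docker

script: npm test

jobs:
  include:
    - stage: deploy
      script: skip                           # don't rerun the tests in this stage
      deploy:
        provider: heroku
        app: tinfa-travis                    # assumed app name
        api_key:
          secure: "ENCRYPTED_HEROKU_TOKEN"   # placeholder written by `travis encrypt $(heroku auth:token) --add deploy.api_key --pro`
        on:
          branch: production
        skip_cleanup: true                   # don't reset the working directory before deploying

notifications:
  slack:
    rooms:
      - secure: "ENCRYPTED_ACCOUNT_TOKEN"    # placeholder written by `travis encrypt "<account>:<token>" --add notifications.slack --pro`

The four node_js entries expand into four test-stage jobs, while the single jobs.include entry adds one deploy-stage job that runs only after all of them pass.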

What about our MONGODB_URI environment variable? In the Travis dashboard, clicking More Options > Settings takes us to a page where we can define our environment variables. Be sure to wrap the value in quotes if it contains any spaces or special characters. I forgot to wrap the MONGODB_URI in quotes, which made my tests fail. Trust me, add quotes, this one took way too long (weeks) to debug.

More on deploying to Heroku with TravisCI here.

One additional file we have to include with Travis is heroku.yml. This file declares how to build the Docker container for deployment. We choose to build with Docker and define the web build Dockerfile to be in the same directory as heroku.yml. If the Dockerfile were in a sub-directory, we would set its path there instead. Lastly, we set the config to contain any environment variables needed in production.
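A minimal sketch of that heroku.yml (the config value shown is only an assumed example):

# heroku.yml
build:
  docker:
    web: Dockerfile            # path to the Dockerfile, relative to heroku.yml
  config:
    NODE_ENV: production       # assumed example of a config value passed at build time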

To setup on Heroku refer to the GitHub Actions section. Feel free to check out the GitHub repo and live site.

Comparing the Tools

Each tool presented its own challenge to setup and configure. I will say in terms of difficulty GH Actions was the easiest to configure while TravisCI was the hardest.

TravisCI was a bit difficult since it requires you to install Ruby in order to install the TravisCI CLI, which in turn is needed to encrypt certain environment variables. I appreciate CircleCI and GitHub Actions for taking care of all of this in the web app, although I will say I do appreciate TravisCI’s succinct yml file compared to the two separate yml files required by GH Actions.

CircleCI fell in the middle in terms of difficulty, succinctness, and ability to configure.

Overall, I see all three as great tools that can significantly improve the process of testing and deploying your application. With my knowledge now, I could probably set up and configure a pipeline in much less time.

Reflecting on Improvements

As I stated near the beginning of this article, I’d like to expand to include more pipelines in the not-so-distant future. As far as improvements to the existing pipelines, I can think of a few. It could be good to build the Docker image first and then test the code in the container. This would eliminate any weird inconsistencies when going from a local Node environment to a Node environment inside a Docker container. Build > Test > Deploy > Release.

Caching would improve build times. Every time the pipeline runs, it starts over. TravisCI has skip_cleanup, but even this is only a small improvement compared to what could be achieved by caching the build images. For example, in our Dockerfile, which contains eight steps, we can assume that most of the time our base image and dependencies will not change often. By caching these layers we would be able to start from the sixth step.

Security is a big interest of mine. Testing the code for security flaws and furthermore testing the containers for security flaws, perhaps using Snyk, would be a great improvement. This would ensure that even if our code is working, we do not deploy any code or containers which could potentially compromise the users or any data.

One last improvement, which is more of a challenge, would be to deploy the containers on an IaaS instead of a PaaS. I haven’t looked into this, but I would assume we would have to write our own scripts to do this instead of relying on pre-built orbs/actions/etc.

You are not just a dreamer. I hope this article helps you break into CI/CD pipelines and gives you the head start to create your own unique pipelines for all your applications.
