Continuous Integration and Continuous Deployment for a Rails App with Bitbucket Pipelines + Capistrano

Image credits Unsplash

Continuous Integration (CI) is an essential part of any modern software development process: developers regularly merge their code changes into a central repository, after which automated builds and tests are run. Gone are the days of monolithic releases with massive changes; today it’s all about releasing fast and often, and most software teams have come to rely on some sort of automated CI system.

The key goals of continuous integration are to find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.

Furthermore, Continuous Integration is very frequently accompanied by Continuous Delivery/Deployment (CD), and when people talk about CI they very often mean both.

CI relies on two main principles:

  1. Changes are merged to the main source branch as often as reasonably possible. Tasks are explicitly split up so as to avoid creating massive change sets.
  2. Each change is fully tested. Automated testing is the foundation of CI. In a team environment, and even on a personal project, it’s nearly impossible to ensure that the latest changes don’t break existing code without tests. And every time a change set is merged to master, CI runs the entire suite to guarantee nothing was impacted negatively.

Nowadays, big software companies distribute their CI systems as hosted services: Google Cloud Build, Travis CI, CircleCI, AWS CodePipeline, and now Atlassian with Bitbucket Pipelines.

With so many options, it’s easy for an Ops team to find it hard to choose a CI stack suited to their project. I also had a headache selecting the CI/CD process and automation tools for our Ruby on Rails project.

This article covers my first experience setting up a CI/CD process with Bitbucket Pipelines + Capistrano for a Rails project.

Prerequisites

  • My target is a CI/CD process that satisfies the following attributes: pretty simple, automates most build and deploy tasks, and suits a small project and startup team (1–5 members).
  • Your repository needs to be hosted on Bitbucket.
  • I assume you have already installed Capistrano for Rails by following these guides.
  • Your Ruby on Rails application should be deployable to the Internet.
  • In this article, I’m using three virtual servers for the development, stage, and production phases.

Bitbucket Pipelines

You may have touched pipeline workflows like Jenkins, Travis, and Circle before, but having similar capabilities built into Bitbucket is quite a big deal. Jenkins is a dedicated CI/CD tool and therefore requires special management as a separate system, while Travis and Circle usually go with GitHub projects.

On the other hand, because Bitbucket offers free private hosted repositories, having a good build system that lives in the same place as your repositories is far more convenient and will hopefully make more people willing to use it.

Enable Bitbucket Pipelines

First, enable the Pipelines service in your repository navigation.

Image credits Bitbucket Cloud Documentation.

But don’t pick a template: you can create the bitbucket-pipelines.yml configuration file easily in your local repository with your own workflow steps, and then just push it to the remote Bitbucket repository.

The YAML file needs to be present in the root of the project (just like with every other CI/CD tool) and its structure is probably familiar as well. Pipelines is Docker based, so you start by defining an image. Or not, as it’s optional: if you don’t define an image, Pipelines uses a default Bitbucket Docker container based on the Ubuntu 14.04 image, with a set of applications available out of the box.
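To illustrate, a minimal bitbucket-pipelines.yml relying on that default image could look like the sketch below; the echo line is just a placeholder for your real build commands:

```yaml
# Minimal bitbucket-pipelines.yml sketch. With no top-level `image:` key,
# builds run in Bitbucket's default Ubuntu-based container.
pipelines:
  default:
    - step:
        script:
          - echo "Hello from Pipelines"
```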

Initial docker image configuration for building Rails project

You should choose an official Docker image for your Ruby app environment, selecting the appropriate Ruby version from Docker Hub.

My project runs on Rails 5 with Ruby 2.4.4, so my bitbucket-pipelines.yml is the following:

image: ruby:2.4.4
pipelines:
  default:
    - step:
        script:
          - apt-get update -y
          - apt-get install -y build-essential git-core curl nodejs libmysqlclient-dev ssh
          - gem install bundler
          - bundle install
          - mv config/secrets.ci.yml config/secrets.yml
          - mv config/database.ci.yml config/database.yml

So, in the config above I just do the following things:

  • Update the system package lists.
  • Install the required libraries.
  • Install the bundler gem.
  • Install gem dependencies.
  • Move config files with test values into place.
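As an illustration of that last step, a config/secrets.ci.yml checked into the repository could hold throwaway values for the test environment only; the key below is a made-up example, not a value from the original project:

```yaml
# config/secrets.ci.yml - dummy values used only inside the CI container;
# never commit real production secrets to the repository.
test:
  secret_key_base: 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```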

Use services and database config

Usually in a Rails project, we need to set up a database that stores test data; it is required before we run the specs. I’m using a database service based on the official Docker Hub MySQL image.

Note — the environment section must contain only literal values.

image: ruby:2.4.4
pipelines:
  default:
    - step:
        script:
          - apt-get update -y
          - apt-get install -y build-essential git-core curl nodejs libmysqlclient-dev ssh
          - gem install bundler
          - bundle install
          - mv config/secrets.ci.yml config/secrets.yml
          - mv config/database.ci.yml config/database.yml
        services:
          - mysql

definitions:
  services:
    mysql:
      image: mysql
      environment:
        MYSQL_DATABASE: 'db_test'
        MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
        MYSQL_USER: 'test'
        MYSQL_PASSWORD: 'password'
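For the build step to reach that service, the database.ci.yml moved into place needs to match the service’s credentials; Pipelines exposes service containers on localhost. A minimal sketch, assuming the mysql2 adapter and mirroring the service definition above:

```yaml
# config/database.ci.yml - must match the mysql service defined in
# bitbucket-pipelines.yml; Pipelines services are reachable on 127.0.0.1.
test:
  adapter: mysql2
  host: 127.0.0.1
  port: 3306
  database: db_test
  username: test
  password: password
```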

Build branches separately in Bitbucket Pipelines

You can build branches in Bitbucket Pipelines by adding branch-specific configuration in your bitbucket-pipelines.yml file.

As I said, I’m building three environments: development, stage, and production. There are two mechanisms here: a default section that everything falls back to, and a branches section where you define specific branches and what needs to happen there. The bitbucket-pipelines.yml file:

image: ruby:2.4.4 # This is the default image
pipelines:
  default:
    - step:
        script:
          - echo "This script runs on all branches that don't have any specific pipeline assigned in 'branches'."
  branches:
    master:
      - step:
          script:
            - echo "This script runs only on commits to the master branch. That will be used for Stage."
    feature/*:
      - step:
          image: ruby:2.5.1 # I use an updated Docker Ruby image
          script:
            - echo "This script runs only on commits to branches with names that match the feature/* pattern. That will be used for the Development phase."

Run tests for each branch

After preparing the basic requirements, I run the test cases or specs for each branch as follows:

image: ruby:2.4.4 # This is the default image
pipelines:
  default:
    - step:
        script:
          - apt-get update -y
          - apt-get install -y build-essential git-core libyaml-dev nodejs libmysqlclient-dev ssh
          - gem install bundler
          - bundle install
          - mv config/secrets.ci.yml config/secrets.yml
          - mv config/database.ci.yml config/database.yml
          - bundle exec rake db:migrate RAILS_ENV=test
          - bundle exec rspec spec/models/
        services:
          - mysql
  branches:
    master:
      - step:
          script:
            - apt-get update
            - apt-get install -y build-essential git-core libyaml-dev nodejs libmysqlclient-dev ssh
            - bundle install
            - mv config/secrets.ci.yml config/secrets.yml
            - mv config/database.ci.yml config/database.yml
            - bundle exec rake db:migrate RAILS_ENV=test
            - bundle exec rspec spec/
            - bundle exec rails test test/api test/integration
          services:
            - mysql
    develop:
      - step:
          image: ruby:2.5.1
          script:
            - apt-get update
            - apt-get install -y build-essential git-core libyaml-dev nodejs libmysqlclient-dev ssh
            - gem install bundler
            - bundle install
            - mv config/secrets.ci.yml config/secrets.yml
            - mv config/database.ci.yml config/database.yml
            - bundle exec rake db:migrate RAILS_ENV=test
            - bundle exec rspec spec/models/
            - bundle exec rspec spec/controllers/
            - bundle exec rspec spec/apis/
            - bundle exec rails test test/models test/controllers test/helpers
          services:
            - mysql

definitions:
  services:
    mysql:
      image: mysql
      environment:
        MYSQL_DATABASE: 'db_test'
        MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
        MYSQL_USER: 'test'
        MYSQL_PASSWORD: 'password'

As you can see from the full sample config above, I’m building test scenarios for each environment; whenever developers make changes on a branch, Bitbucket Pipelines picks up the test scripts suited to it.

There are some limitations in the Pipelines workflow:

  • No inheritance of tasks. Defining the same thing in multiple places almost always ends up causing issues eventually, so I’d really like some sort of inheritance structure. A specific, reusable step that includes all the common requirements would allow for far greater flexibility.
  • Just one step per section. Everything turns into one long list, with no way to see why something is done. Splitting the list into sections like prepare, build, and deploy would make it far more legible and therefore more maintainable. I suspect this is something they’ll be looking at regardless of my feature request.

Make A Smooth Pipeline

At the beginning, I mentioned that when people talk about CI they often refer to CD too. So, after my test scenarios on a branch pass, I want to deploy the changes to the development server automatically.

It’s simple: just use Capistrano commands in bitbucket-pipelines.yml. For example, the develop branch section:

develop:
  - step:
      image: ruby:2.5.1
      script:
        - apt-get update
        - apt-get install -y build-essential git-core libyaml-dev nodejs libmysqlclient-dev ssh
        - gem install bundler
        - bundle install
        - mv config/secrets.ci.yml config/secrets.yml
        - mv config/database.ci.yml config/database.yml
        - bundle exec rake db:migrate RAILS_ENV=test
        - bundle exec rspec spec/
        - bundle exec rails test test/controllers test/helpers
        - bundle exec cap development deploy
      services:
        - mysql

Remember, when the Bitbucket Pipelines server performs the Capistrano deployment, you need to put the Bitbucket Pipelines SSH public key on the remote server you want to deploy to. You can get the Pipelines SSH public key as follows:

  1. In the repository Settings, go to SSH keys under 'Pipelines'.
  2. Click Generate keys to create a new SSH key pair.

More details in this guide.

With Capistrano, I can deploy to the remote virtual servers for development or stage. If I need to release to Production after the Stage phase passes, I can update the pipeline YAML file, or just run the Capistrano command on the master branch.
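For reference, the Capistrano stage file behind `bundle exec cap development deploy` might look like the sketch below; the hostname, user, and deploy path are hypothetical placeholders, not values from the original project:

```ruby
# config/deploy/development.rb - hypothetical Capistrano stage definition;
# replace the host, user, and paths with your own server's values.
server "dev.example.com", user: "deploy", roles: %w[app db web]

set :branch, "develop"                        # deploy the develop branch
set :deploy_to, "/var/www/myapp_development"  # target directory on the server
set :rails_env, "development"
```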


Conclusion

So, that’s all. My Rails CI/CD solution serves the purposes mentioned above. Despite its limitations, Bitbucket Pipelines is a great start.

There are certainly some things that I believe can be improved. In fact, if you want to build your own Docker containers to run the builds, just follow these guides; then there isn’t really much you can’t do. You can include the dependencies in the image and even call some prepared bash files. Also, your bitbucket-pipelines.yml configuration can be validated by this tool.

I am looking forward to seeing new Bitbucket Pipelines features.