A Complete Guide to Deploying a Containerized Application Using Managed Instance Groups (MIGs) in Google Cloud (GCP) with Continuous Integration (CICD) — Part 3

Paulo Carvalho
Mar 5 · 7 min read

In the third part of this guide, we will walk through setting up a CICD solution for a Docker-containerized application such as a Ruby on Rails backend.

Scenario

We want to deploy a web application consisting of one or more frontends using a client-side framework such as ReactJS which connects to one (or more) backends that have been containerized with Docker. Our backend will connect to a CloudSQL instance for storage and all environment variables will be encrypted with GCP's Key Management Service (KMS). Git commits to a specific branch on GitHub will trigger a build and deploy of the application. A load balancer will direct traffic and serve as a proxy for HTTPS using a managed certificate.

Pre-requisites

Set up the environment proposed in part 1 of this guide or an equivalent. You should also have a basic understanding of Docker and CloudBuild triggers as shown in part 2.

Step 1: Dockerize the App

Below, we have a Dockerfile (placed in the root of the repository) that creates Docker images containing our application's source code. Note that this is a multi-stage build with two distinct end images: one for testing and the other for deployment, since each case has different dependencies (this may not be true for your application).

Note: We define an ENTRYPOINT for our production image which performs a database migration prior to starting our web server. This is one of many ways to perform migrations and strategies for rolling back need to be considered when choosing how to migrate.
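One alternative worth sketching: the migrate-then-serve step can live in a small entrypoint script rather than an inline ENTRYPOINT, so that `set -e` aborts the container on a failed migration and `exec` hands PID 1 to the web server (letting it receive container stop signals). The functions below are hypothetical stand-ins for the real commands so the control flow can be seen in isolation:

```shell
#!/bin/sh
# docker-entrypoint.sh (hypothetical sketch): run migrations, then replace
# the shell with the web server process.
set -e  # abort (and fail the container) if the migration step fails

run_migrations() { echo "migrating"; }  # stands in for: bundle exec rails db:migrate
start_server() { echo "serving"; }      # stands in for: exec bundle exec puma -C config/puma.rb

run_migrations
start_server
```

In the real script, the last line would be `exec bundle exec puma -C config/puma.rb`; the `exec` is the design point, since without it Puma runs as a child of the shell and may miss SIGTERM during instance replacement.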

#############################
# STAGE 1: Installer build  #
#############################
FROM ruby:2.5-alpine AS installer

# Expose port
EXPOSE 3000

# Set desired port
ENV PORT 3000

# Set the app directory var
ENV APP_HOME /app
RUN mkdir -p ${APP_HOME}
WORKDIR ${APP_HOME}

# Install necessary packages
RUN apk add --update --no-cache \
    build-base curl less libressl-dev zlib-dev \
    mariadb-dev tzdata imagemagick libxslt-dev \
    bash nodejs

# Copy gemfiles to be able to bundle install
COPY Gemfile* ./

# Install all common gems.
RUN bundle install --deployment --jobs $(nproc) --without development test

#############################
# STAGE 2: Test build       #
#############################
FROM installer AS test-image

# Set environment
ENV RAILS_ENV test

# Copy installed gems from installer image
COPY --from=installer /app/vendor/bundle /app/vendor/bundle

# Install gems to /bundle (Note that here we include the test group)
RUN bundle install --deployment --jobs $(nproc) --with test --without development

# Add app files
ADD . .

# Install the MySQL client that we will use to ping the DB
RUN apk add --update --no-cache mariadb-client

#############################
# STAGE 3: Production build #
#############################
FROM installer AS production-image

# Set environment
ENV RAILS_ENV production

# Copy installed gems from installer image
COPY --from=installer /app/vendor/bundle /app/vendor/bundle

# Add app files
ADD . .

# Precompile assets
RUN DB_ADAPTER=nulldb bundle exec rake assets:precompile assets:clean

# db:migrate and Puma start command
ENTRYPOINT bundle exec rails db:migrate && bundle exec puma -C config/puma.rb
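Either end image can be built locally by pointing `docker build` at its stage with `--target` (the tag names here are placeholders, not anything the pipeline depends on):

```shell
# Build only the test image (the build stops at the test-image stage)
docker build --target test-image -t my-test-image .

# Build the production image
docker build --target production-image -t my-deploy-image .
```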

Step 2: Create Docker-Compose for Testing

Docker Compose is a useful tool for running multiple containers and allowing them to interface with each other. We use this capability to run our integration tests (in the example below we only connect to a DB but options are limitless).

The Compose below spins up a MySQL database container called db with a default database my-db and no root password (please don't do this with a user-facing DB!). It then creates a second container called app running a test version of our application. Note the ./scripts/wait-for-mysql.sh (shown after the docker-compose file) preceding the test command. This is required (as discussed here) in order to ensure that the database is ready to accept connections before we connect to it.

version: '3.4'

services:
  app:
    image: gcr.io/${PROJECT_ID}/my-test-image:${SHORT_SHA}
    command: ["./scripts/wait-for-mysql.sh", "db", "bundle", "exec", "rails", "t"]
    ports:
      - 3000:3000
    depends_on:
      - db
    environment:
      - TEST_DATABASE_URL=mysql2://root@db/my-db
      - RAILS_ENV=test

  db:
    image: mysql:5.7.24
    ports:
      - 3306:3306
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_DATABASE=my-db

networks:
  default:
    external:
      name: cloudbuild
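The same stack can be exercised locally before pushing. The compose file declares the external cloudbuild network, which exists on Cloud Build workers but not on a workstation, so locally you need to create it first (the PROJECT_ID and SHORT_SHA values below are placeholders):

```shell
# Create the external network the compose file expects. Inside Cloud Build
# the "cloudbuild" network already exists; locally we make it ourselves.
docker network create cloudbuild || true

# Run the test stack; --exit-code-from makes the command's exit status
# mirror the app container's, so a failed test suite fails the command.
PROJECT_ID=my-project SHORT_SHA=local \
  docker-compose -f docker-compose.cicd.yml up --exit-code-from app
```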

The wait-for-mysql.sh script is shown below and is based on the Compose documentation script.

#!/bin/sh
# wait-for-mysql.sh: block until MySQL accepts connections, then run the
# remaining arguments as a command.
set -e

# First argument is the database host (the hostname "db" is hardcoded below
# to match the compose service); everything after it is the command to run.
host="$1"
shift
cmd="$@"

until mysql -h "db" -u "root" --connect_timeout 1 -e "\q"; do
  >&2 echo "MySQL is unavailable - sleeping for 1 second"
  sleep 1
done

>&2 echo "MySQL is up - executing command"
exec $cmd
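The same pattern generalizes to any readiness probe: retry a command until it succeeds, with a bounded attempt count so a dead dependency eventually fails the build instead of hanging it. A minimal sketch (the `wait_for` helper and the WAIT_ATTEMPTS knob are assumptions of this sketch, not part of the original script):

```shell
#!/bin/sh
# wait_for: retry "$@" until it succeeds, sleeping 1 second between tries.
# Gives up (returns 1) after WAIT_ATTEMPTS attempts (default 30).
wait_for() {
  max_attempts="${WAIT_ATTEMPTS:-30}"
  attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "dependency never became ready" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
}

wait_for true && echo "ready"  # a probe that succeeds at once prints "ready"
```

In the CICD context the probe would be the `mysql -h db ...` ping above; anything that exits 0 when the dependency is healthy works.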

Step 3: Create CloudBuild File for Testing

Note: For more information on CloudBuild see part 2 of the guide.

The deployment pipeline described in this part of the guide will require two build files: one to create and deploy a production-grade image, and the other to create a test image and run tests on it. The latter is shown in this step.

We create a file named cloudbuild-tests.yml (you can rename it as desired) as shown below. The build it describes consists of two steps:

  1. Build the Docker image using Kaniko. We use Kaniko in order to cache the Docker build steps, which speeds up future runs of the build. The image is automatically pushed to the specified destination when the build is complete, along with all cache artifacts.
  2. We run a docker-compose which is responsible for running our tests. The exit code from our docker-compose is used to indicate if tests passed or failed.
steps:
  # Build TEST image
  - name: gcr.io/kaniko-project/executor:v0.17.1
    id: buildtest
    args:
      - --destination=gcr.io/$PROJECT_ID/my-test-image:$BRANCH_NAME
      - --destination=gcr.io/$PROJECT_ID/my-test-image:$SHORT_SHA
      - --cache=true
      - --cache-ttl=72h
      - --target=test-image
  # Run Tests
  - name: 'docker/compose:1.25.3'
    id: test
    args:
      - -f
      - docker-compose.cicd.yml
      - up
      - --exit-code-from
      - app
    env:
      - 'PROJECT_ID=$PROJECT_ID'
      - 'SHORT_SHA=$SHORT_SHA'
    waitFor:
      - buildtest
timeout: 900s # 15 minutes
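While the trigger in the next step runs this build automatically, it can be handy to submit it by hand while iterating. A hypothetical invocation is sketched below; SHORT_SHA and BRANCH_NAME are normally injected by the trigger, so here we derive them from git (note that overriding built-in substitutions from the command line is only possible in manually submitted builds):

```shell
# Manually submit the test build from the repository root (sketch).
gcloud builds submit \
  --config=cloudbuild-tests.yml \
  --substitutions=SHORT_SHA="$(git rev-parse --short HEAD)",BRANCH_NAME="$(git rev-parse --abbrev-ref HEAD)" \
  .
```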

Step 4: Create CloudBuild Trigger for Testing

Whenever a developer pushes to a branch we want our tests to automatically run. To this end, we create a trigger such as the one below (for more information on creating a trigger see part 2):

The trigger above will run whenever there is a push to any branch except staging or production. We will have a separate script (or multiple separate scripts) that will run for those branches.

Step 5: Create CloudBuild File for Deployment

The CloudBuild script below consists of 5 steps:

  1. We build the production image and push it to our container registry.
  2. We decrypt our secrets.
  3. We create a new instance template using our previously built image and decrypted secrets. Note: This manner of secret management may not be appropriate for your organization. Adjust accordingly.
  4. If your organization deploys often, you may need to ensure that the cluster is stable (the last update has already been applied) before attempting to perform a new update. Otherwise, the new update will fail. This step waits for the cluster to become stable.
  5. We update our managed instance group with our new instance template. Note the --max-unavailable flag. This is recommended if you have a small cluster (fewer servers than availability zones in your region) in order to avoid disruptions during the update.

Note: We increased the timeout (from the default 10 minutes) to accommodate the time required to build our image if cache has expired. Adjust accordingly for your application.

steps:
  # Build PRODUCTION image
  - name: gcr.io/kaniko-project/executor:v0.17.1
    id: build
    args:
      - --destination=gcr.io/$PROJECT_ID/my-deploy-image:$BRANCH_NAME
      - --destination=gcr.io/$PROJECT_ID/my-deploy-image:$SHORT_SHA
      - --cache=true
      - --cache-ttl=168h
      - --target=production-image
  # Decrypt secrets
  - name: gcr.io/cloud-builders/gcloud
    id: decrypt-secrets
    args:
      - kms
      - decrypt
      - --ciphertext-file=deploy/environment/secrets.$_ENV.enc
      - --plaintext-file=secrets.dec
      - --location=global
      - --keyring=my-keyring
      - --key=my-key
  # Create the new instance template
  - name: gcr.io/cloud-builders/gcloud
    id: create-instance-template
    args:
      - compute
      - instance-templates
      - create-with-container
      - my-template-$_ENV-$SHORT_SHA
      - --custom-cpu=1
      - --custom-memory=2GB
      - --boot-disk-size=20GB
      - --container-env-file=secrets.dec
      - --region=southamerica-east1
      - --subnet=my-subnet-$_ENV
      - --tags=allow-hc-and-proxy,allow-ssh
      - --container-image
      - gcr.io/$PROJECT_ID/my-deploy-image:$SHORT_SHA
  # Make sure that our MIG is stable before update
  - name: gcr.io/cloud-builders/gcloud
    id: wait-until-stable
    args:
      - compute
      - instance-groups
      - managed
      - wait-until
      - my-mig
      - --stable
      - --region=southamerica-east1
  # Update the managed instance group
  - name: gcr.io/cloud-builders/gcloud
    id: update-instance-group
    args:
      - compute
      - instance-groups
      - managed
      - rolling-action
      - start-update
      - my-mig
      - --version
      - template=my-template-$_ENV-$SHORT_SHA
      - --region=southamerica-east1
      - --max-unavailable=0
timeout: 1200s # 20 minutes
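Because every deploy creates a fresh my-template-$_ENV-$SHORT_SHA, instance templates accumulate over time. They can be listed and pruned out of band; the commands below are a hedged sketch (the filter pattern and the template name are placeholders matching the naming scheme above):

```shell
# List templates for the staging environment, oldest first.
gcloud compute instance-templates list \
  --filter="name ~ ^my-template-staging-" \
  --sort-by=creationTimestamp

# Delete a template once no instance-group version references it any more
# (deletion fails if the template is still in use, which is a safe default).
gcloud compute instance-templates delete my-template-staging-abc1234 --quiet
```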

Step 6: Create CloudBuild Trigger for Deployment

We want to configure a trigger to automatically deploy our application to the staging environment whenever code is merged into the staging branch. The trigger is shown below:

Note: A similar trigger can be created for production. However, I would recommend placing production in a separate project entirely (the $PROJECT_ID in our scripts will handle that, as long as the trigger is created in the separate project as well). Separate projects give better isolation between staging and production, and also simplify permission management for developers who have access to either system.

Conclusion

At this point, we have all the infrastructure set up (part 1) and are able to deploy frontend code automatically to our Cloud Storage bucket(s) (part 2). Now we are also able to perform testing and deployment of our backends.
