Deploying a Rails app to GCP Cloud Run

Gavin Carew
9 min read · Apr 17, 2023

Hopefully, you’ve worked through containerizing a Rails app before getting here. If not, check out Part 2. Here’s where we put that practice into action: we’re going to deploy to Google Cloud Platform (GCP), specifically to Cloud Run.

This guide borrows heavily from Arnaud Lachaume’s blog on deploying to Cloud Run. He also wrote one of the best guides to using GraphQL with Rails that I’ve come across.

Why Cloud Run?

Truthfully, it’s because it’s a very cheap place to host an app. Cloud Run will allow your app to scale to zero, meaning it’s free if it’s unused. It’s meant for microservices — small, self-contained applications with a specific purpose. But the fact that it can run containers means we can really deploy whatever we want to Cloud Run.

Setup

Download and install the Google Cloud CLI. Start it up by running gcloud init. Log in to the CLI with gcloud auth login.

Log in to the Google Cloud (web) Console in a web browser and create a new project. Set your new project as the default project with gcloud config set project $MY_PROJECT_ID. You can either find your project ID in the cloud console, or you can list it in your CLI with gcloud projects list. It should be closely related to the name of your project.
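
Collected in one place, that initial setup looks like this (replace $MY_PROJECT_ID with your actual project ID):

# Initialize the CLI and authenticate
gcloud init
gcloud auth login

# Find your project ID, then set it as the default
gcloud projects list
gcloud config set project $MY_PROJECT_ID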

You can do almost everything we’re going to cover here either in the cloud console or with the gcloud CLI. I’m just going to show you how I do things, but this is not the only way.

I’m using Viewing Party again (see Part 2). I recommend switching to a new branch without the previous Docker stuff so you don’t get confused about what goes where.

Setting up the database

Since Cloud Run is a serverless environment, you can’t deploy a database container to it the way we did when getting our development environment running in Part 2: the database won’t persist. So the first step is to set up a Cloud SQL instance that our app can connect to.

An instance is kind of like the container we ran our database on in Part 2; one instance can have many databases. From the Cloud Console, navigate to the Cloud SQL section and click on ‘Create Instance’, then choose Postgres. In the instance configuration, you can customize the amount of space and the availability you think you’ll need. Since this is for a demo, I’m using a shared core instance and the lowest storage space (10 GB).
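
If you’d rather stay in the terminal, you can likely create a comparable instance with the gcloud CLI. This is a sketch under my demo assumptions (the instance name, Postgres version, and region are examples), not the exact output of the console wizard:

# Create a small shared-core Postgres instance with minimal storage
gcloud sql instances create vp-sql-instance \
--database-version=POSTGRES_14 \
--tier=db-g1-small \
--region=us-west1 \
--storage-size=10GB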

It can take a while for an instance to be created. While you’re waiting, jump ahead to getting secrets set up, and we’ll come back to networking the database.

Managing secrets

Remember how we set environment variables in Part 2? Unfortunately, that strategy won’t work here. Since instances of our app are not persistent on Cloud Run, we need a way to store things like API keys in the cloud so that they’re accessible when a container is built from an image.

You can use Google’s key management system (KMS) directly, but a really good option is Berglas. It’s a wrapper that creates a cloud storage bucket that stores keys encrypted by the KMS, and is overall easier to use. I’ll go through the setup below, but the full docs can be found here.

Install Berglas through the command line with arch -x86_64 brew install berglas (the arch -x86_64 prefix forces the Intel build under Rosetta on Apple Silicon; on an Intel Mac, plain brew install berglas does the same thing). The rest of the setup will be done through the command line:

# Export your project ID as an environment variable. 
# The rest of this setup guide assumes this environment variable is set:
export PROJECT_ID=$MY_PROJECT_ID

# Export your desired Cloud Storage bucket name. This does not exist yet
export BUCKET_ID=$MY_SECRETS_BUCKET

# Enable the required services
gcloud services enable --project ${PROJECT_ID} \
cloudkms.googleapis.com \
storage-api.googleapis.com \
storage-component.googleapis.com

# Bootstrap a Berglas environment.
# This will create a new Cloud Storage bucket for storing secrets and a Cloud KMS key for encrypting data.
berglas bootstrap --project $PROJECT_ID --bucket $BUCKET_ID

# Export key management system key
export KMS_KEY=projects/${PROJECT_ID}/locations/global/keyRings/berglas/cryptoKeys/berglas-key

# Create a secret. Replace [MY_API_KEY] with your actual API key.
# You can also, of course, change the path from movies-api-key to whatever you need
# You can create multiple secrets following this pattern.
berglas create ${BUCKET_ID}/movies-api-key "[MY_API_KEY]" \
--key ${KMS_KEY}

# You will need a master key and secret_key_base to deploy to production
# If you don't have one, generate it with
# EDITOR="code --wait" bin/rails credentials:edit
# Then save and exit that file to save your secret key base.

# Pass the master key to Berglas as a secret.
berglas create ${BUCKET_ID}/master.key "$(cat config/master.key)" \
--key ${KMS_KEY}

# Create a service account for the Cloud Run service
gcloud iam service-accounts create "cloudrun-vp-demo" \
--project ${PROJECT_ID}

# Export the email associated with the service account
export SA_EMAIL=cloudrun-vp-demo@${PROJECT_ID}.iam.gserviceaccount.com

# Grant the service account access to the secrets
berglas grant ${BUCKET_ID}/movies-api-key --member serviceAccount:${SA_EMAIL}
berglas grant ${BUCKET_ID}/master.key --member serviceAccount:${SA_EMAIL}

Now that Berglas is set up, creating new secrets is as simple as running berglas create, then berglas grant to give your service account access to them.

Networking to your database

Go back to your Cloud SQL instance in the cloud console. When it’s available, create a database under the ‘Databases’ section. Then go to the ‘Users’ section and create a user with built-in authentication. (The CLI equivalent is sketched below.)
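
Something like this should do the same job from the terminal; the database and user names are placeholders:

# Create the production database on the instance
gcloud sql databases create my_db_production --instance=vp-sql-instance

# Create a user with built-in (password) authentication
gcloud sql users create my_db_user \
--instance=vp-sql-instance \
--password=[MY_DB_PASSWORD]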

Next, update the information in your config/database.yml file to point to the new database:

# config/database.yml

default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 10 } %>
  host: localhost

development:
  <<: *default
  database: viewing_party_lite_development

test:
  <<: *default
  database: viewing_party_lite_test

production:
  <<: *default
  database: my_db_production
  username: <%= ENV.fetch("PROD_DB_USERNAME") %>
  password: <%= ENV.fetch("PROD_DB_PASSWORD") %>
  host: /cloudsql/[MY_INSTANCE_CONNECTION]
  # For example, my host looks like this:
  # /cloudsql/vp-demo-383902:us-west1:vp-sql-instance

You should only have to change the production environment here (I left the rest as previous defaults). For the database name, enter the name of the database you created above. You can find the Cloud SQL instance connection by navigating to your instance in the cloud console — it will be in the ‘Connect to this instance’ section.
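
If you want to grab the instance connection name from the CLI instead, gcloud can print it directly (the instance name here is an example):

# Prints something like vp-demo-383902:us-west1:vp-sql-instance
gcloud sql instances describe vp-sql-instance --format='value(connectionName)'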

You’ll notice we also created two more environment variables for the username and password we created earlier. Let’s add them to our secrets bucket through the command line:

berglas create ${BUCKET_ID}/prod-db-username "[MY_DB_USERNAME]" \
--key ${KMS_KEY}

berglas create ${BUCKET_ID}/prod-db-password "[MY_DB_PASSWORD]" \
--key ${KMS_KEY}

berglas grant ${BUCKET_ID}/prod-db-username --member serviceAccount:${SA_EMAIL}

berglas grant ${BUCKET_ID}/prod-db-password --member serviceAccount:${SA_EMAIL}

Creating the Dockerfile

Much like our previous docker deployment, we need a Dockerfile. If you don’t have one, create it with touch Dockerfile. Add the following to the Dockerfile, making replacements where specified:

FROM ruby:2.7.4

# Get berglas
COPY --from=gcr.io/berglas/berglas:latest /bin/berglas /bin/berglas

# Install bundler
RUN gem update --system
RUN gem install bundler

# Install production dependencies.
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install

# Copy local code to the container image.
COPY . ./

# Environment
ENV RAILS_ENV production
ENV RAILS_MAX_THREADS 60
ENV RAILS_LOG_TO_STDOUT true

# These are links to your secrets. Sub these with your own links.
# If you forget where you put your keys, from your console go to
# Cloud Storage >> Buckets
ENV MOVIES_API_KEY_LINK [MY_BUCKET]/movies-api-key
ENV RAILS_MASTER_KEY_LINK [MY_BUCKET]/master.key
ENV DB_USERNAME_LINK [MY_BUCKET]/prod-db-username
ENV DB_PW_LINK [MY_BUCKET]/prod-db-password

# Run the web service on container startup.
CMD ["bash", "entrypoint.sh"]

If you haven’t added a .dockerignore file, add that as well, and make sure your .env file is listed in it so your local secrets don’t get baked into the image.
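
Here’s a minimal .dockerignore sketch. The .env entry is the important one; the rest are common Rails artifacts you probably don’t want in the image, so adjust to your project:

.env
.git
log/*
tmp/*
node_modules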

Creating the entrypoint

Like we did for the Dockerfile, create an entrypoint with touch entrypoint.sh and add the following:

#!/bin/bash
set -e

# Environment setup
# This is where you define the environmental variables you will access from your app
export MOVIES_API_KEY=$(berglas access $MOVIES_API_KEY_LINK)
export RAILS_MASTER_KEY=$(berglas access $RAILS_MASTER_KEY_LINK)
export PROD_DB_USERNAME=$(berglas access $DB_USERNAME_LINK)
export PROD_DB_PASSWORD=$(berglas access $DB_PW_LINK)

# Run deploy tasks in warmup mode.
# These will be passed as environmental variables in the build step
if [ "$WARMUP_DEPLOY" == "true" ]; then
# The traditional Rails migration.
# As you deploy new versions, this will update the DB.
echo "Warmup deploy: running migrations..."
bundle exec rake db:migrate
echo "Warmup deploy: migrations done"
fi

# Precompile assets (can be skipped for an API)
RAILS_ENV=production bundle exec rails assets:precompile

# Start Puma
bundle exec puma -p 8080

Why the warmup deploy? We want to be able to pass an environment variable that will run migrations if we are submitting a new build (in case we made changes to our database that need migrating) but don’t necessarily want that to run when deploying additional containers from our image. It also means that we don’t need to grant Cloud Build access to Cloud SQL.

The first deploy

The first time we build and deploy is going to be through the command line. There are a number of config flags that only need to be passed once, and debugging is easier when building and deploying from the command line. We can automate these steps later.

First, we’ll build the image. For the Cloud Run service name, you can really choose anything:

export _SERVICE_ID=cloud-run-vp-demo

gcloud builds submit --tag gcr.io/$PROJECT_ID/cloudrun/$_SERVICE_ID

# If you are prompted to allow the Cloud Build API access to your project, do so.

Once that has been successfully built, here is the deployment step:

gcloud run deploy $_SERVICE_ID \
--image gcr.io/$PROJECT_ID/cloudrun/$_SERVICE_ID \
--platform managed \
--region us-west1 \
--service-account $SA_EMAIL \
--add-cloudsql-instances [MY_INSTANCE_CONNECTION] \
--set-env-vars WARMUP_DEPLOY=true

We are deploying our Cloud Run service from the image we created in the build step. Here’s a breakdown of what is happening in each line of this command:

  • Use the service name we specified
  • Deploy a container from the image we submitted in the build step
  • Let Google’s managed platform handle autoscaling and traffic management
  • Specify the region; it’s easiest to keep everything in the same region
  • Set the service account to the one we granted access to our Berglas secrets
  • Attach the Cloud SQL instance to the Cloud Run service
  • Set the environment variable WARMUP_DEPLOY to true so that migrations will run

Click the link in your terminal that says ‘Service URL.’ Your app should be online! If something went wrong, check your logs and debug from there; one way to read them from the terminal is sketched below. If it’s horribly wrong, reach out to me on LinkedIn or Slack for help.
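
A Cloud Logging query like this should pull recent entries for the service (the service name is an example; swap in your own):

# Show the 50 most recent log entries for the Cloud Run service
gcloud logging read \
'resource.type=cloud_run_revision AND resource.labels.service_name=cloud-run-vp-demo' \
--limit 50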

Automating future deployment with Cloud Build

Cloud Build fills roughly the role Docker Compose did when we were deploying locally: functionally, it’s a sequence of steps to build and deploy your app. To make it work, create a file in the root directory of your app with touch cloudbuild.yaml. Add the following to that file:

timeout: 1200s
steps:
  # Build the application
  - name: gcr.io/cloud-builders/gcloud
    args: ['builds', 'submit', '--tag', 'gcr.io/$PROJECT_ID/cloudrun/$_SERVICE_ID']

  # Deploy the warmup version
  - name: gcr.io/cloud-builders/gcloud
    args:
      - run
      - deploy
      - $_SERVICE_ID
      - '--image=gcr.io/$PROJECT_ID/cloudrun/$_SERVICE_ID'
      - '--platform=managed'
      - '--region=us-west1'
      - '--update-env-vars=WARMUP_DEPLOY=true'

  # Deploy the mainstream version
  - name: gcr.io/cloud-builders/gcloud
    args:
      - run
      - deploy
      - $_SERVICE_ID
      - '--image=gcr.io/$PROJECT_ID/cloudrun/$_SERVICE_ID'
      - '--platform=managed'
      - '--region=us-west1'
      - '--update-env-vars=WARMUP_DEPLOY=false'

You can pass substitutions at the time you submit this to Cloud Build. $PROJECT_ID is a built-in substitution that Cloud Build fills in automatically, but user-defined substitutions like $_SERVICE_ID have to be supplied at submit time, and I see no reason not to just hardcode your service ID in this file instead.
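
If you do want to go the substitution route, the flag looks like this (the value is an example):

gcloud builds submit --config cloudbuild.yaml \
--substitutions _SERVICE_ID=cloud-run-vp-demo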

Notice how this file essentially recreates the build and deploy steps from above, just in a YAML format.

Cloud Build will need the Cloud Run Admin and Service Account User roles in order to perform these steps. Go to your Cloud Build settings in the console and enable these two roles, or grant them from the CLI as sketched below.
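
This is a sketch of the CLI route: it assumes the default Cloud Build service account, which is your project number at cloudbuild.gserviceaccount.com (replace [PROJECT_NUMBER] accordingly):

# Look up your project number
gcloud projects describe ${PROJECT_ID} --format='value(projectNumber)'

# Grant Cloud Run Admin to the Cloud Build service account
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member serviceAccount:[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com \
--role roles/run.admin

# Let Cloud Build deploy as the runtime service account we created earlier
gcloud iam service-accounts add-iam-policy-binding ${SA_EMAIL} \
--member serviceAccount:[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com \
--role roles/iam.serviceAccountUser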

Finally, submit your build:

gcloud builds submit --config cloudbuild.yaml

This should build and deploy your app and is the only step you need to run in future deployments. If you get stuck, you can find my branch with the Dockerfile, entrypoint.sh, and cloudbuild.yaml here.

Conclusion

I hope that once you get an app deployed to Cloud Run, you’ll begin to understand the concepts behind deploying apps to cloud environments. From there, it’s all just details. AWS and Azure both have analogous cloud products, and the same patterns can be used for deploying across all three providers.

If there is anything I missed or any other guides you want to see, please let me know!

Big thanks to Arnaud Lachaume at Keypup for the best guide to Cloud Run I’ve seen so far.

Also thank you to Justin Domingus, who worked with me for hours to troubleshoot Cloud Build issues.

If this seems overwhelming, try going back to Part 2: Dockerizing a Rails App
