Google Cloud SQL — Tips & Tricks

Konstantin Tarkus
6 min read · Sep 5, 2020
Google Cloud SQL, PostgreSQL Edition

When creating a new Cloud SQL instance…

Ensure that the instance name is picked wisely. You may find it helpful to keep it short and include the Cloud SQL edition (MySQL, PostgreSQL, SQL Server) and version number. For example, pg12.

https://console.cloud.google.com/sql/

Down the road, when you upgrade to a newer version, you would create a separate Cloud SQL instance (e.g. pg14), back up the data from the previous database instance, and restore it into the new one. Note that renaming a Cloud SQL instance (ID) is not possible, nor is changing its region. Also, when you delete a Cloud SQL instance, you will not be able to create a new instance with the same name, at least not within the next couple of months.
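For example, such a migration could be scripted with gcloud (a sketch; the bucket name is hypothetical, and the instance's service account needs write access to it):

$ gcloud sql export sql pg12 gs://example-backups/pg12-dump.sql.gz --database=example
$ gcloud sql import sql pg14 gs://example-backups/pg12-dump.sql.gz --database=example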

Most likely, you will be using a separate GCP project for each deployment environment (prod, staging, test, dev). In that case, adding the environment (slug) to the instance name is not necessary.

Enable access from public IP addresses solely for development and administration convenience. Whitelist only the IP addresses (or IP ranges) from which the database may be accessed. You would also need to reject connections that do not present a valid SSL certificate.
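For example, both settings could be applied with gcloud (a sketch; the IP address is a placeholder):

$ gcloud sql instances patch pg12 --authorized-networks=203.0.113.10/32 --require-ssl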

If unsure, pick the “micro” database instance size (shared vCPU, 0.6 GB RAM, roughly $8/month); it should handle a small web application’s workload just fine unless something is misconfigured.

Once the database instance (VM) is created, go to its settings > Connections, disable unsecured connections, and download client and server SSL certificates to your web application project.

When creating a database…

During development, it’s totally fine to use a single Cloud SQL instance; just create a separate database per deployment environment. Adding the environment name (slug) to database names ensures that whenever you administer a database, it’s immediately clear which environment it belongs to 😄

  • example — production
  • example_dev — shared development environment
  • example_test — test / QA
  • example_test_123 — review app; automatically created per PR
  • example_local — local instance of PostgreSQL database

Right before releasing your app to the public, you would just copy the production database to a separate Cloud SQL instance.

When creating these databases, ensure that the correct charset and collation are used.
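For example, creating one of these databases with psql could look like this (a sketch, assuming UTF-8 encoding and an en_US locale):

$ psql -c "CREATE DATABASE example_dev ENCODING 'UTF8' LC_COLLATE 'en_US.UTF8' LC_CTYPE 'en_US.UTF8' TEMPLATE template0;"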

Connecting to a Cloud SQL instance from Node.js

You may find it convenient to place database connection settings into .env files. This way, you will be able to re-use them for different purposes: launching the app, migrating the database, configuring CI/CD workflows, etc.

In the case of PostgreSQL, the environment variables would look like this:

# PostgreSQL
# https://www.postgresql.org/docs/current/static/libpq-envars.html
PGHOST=104.197.10.130
PGPORT=5432
PGUSER=postgres
PGPASSWORD=~E7Rf][k*P/$9.%F
PGDATABASE=example_dev
PGSSLMODE=verify-ca
PGSSLCERT=./ssl/client-cert.dev.pem
PGSSLKEY=./ssl/client-key.dev.pem
PGSSLROOTCERT=./ssl/server-ca.dev.pem
PGSERVERNAME=example-dev:pg12

These settings will be used for local development, and during deployment, you would inject slightly different settings:

PGHOST=/cloudsql/example-dev:us-central1:pg12
PGUSER=postgres
PGPASSWORD=~E7Rf][k*P/$9.%F
PGDATABASE=example_dev

This ensures that the app efficiently connects to the Cloud SQL instance through a Unix socket, which is available to Cloud Functions, Cloud Run, and App Engine apps.

For this to work, you just need to ensure that the Cloud SQL Admin API is enabled in the project where your app is hosted, and that the service account under which the app runs (e.g. example@appspot.gserviceaccount.com) is granted the Cloud SQL Client (or a more permissive) IAM role in the GCP project with the database.
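Granting that role could look like this (a sketch using gcloud; the project ID matches the instance connection name used above):

$ gcloud projects add-iam-policy-binding example-dev \
    --member="serviceAccount:example@appspot.gserviceaccount.com" \
    --role="roles/cloudsql.client"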

Once these environment variables are loaded into the app (see dotenv, envalid), initializing a new database client can be done without explicitly passing any connection settings other than SSL and connection pool size.
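For example, with the node-postgres (pg) module, the initialization could look like this (a sketch; pg picks up the standard PG* variables from the environment on its own, while the SSL files and the custom PGSERVERNAME variable are read manually):

import fs from "fs";
import pg from "pg";

// Host, port, user, password, and database are resolved by node-postgres
// from the standard PGHOST, PGPORT, PGUSER, PGPASSWORD, PGDATABASE variables
const db = new pg.Pool({
  max: 10, // connection pool size
  ssl:
    process.env.PGSSLMODE === "verify-ca"
      ? {
          ca: fs.readFileSync(process.env.PGSSLROOTCERT, "utf8"),
          cert: fs.readFileSync(process.env.PGSSLCERT, "utf8"),
          key: fs.readFileSync(process.env.PGSSLKEY, "utf8"),
          servername: process.env.PGSERVERNAME, // must match the certificate's CN
        }
      : false, // Unix socket connections (e.g. Cloud Run) don't use SSL
});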

Note that the servername option needs to match the server name listed on the SSL certificate. Otherwise, the connection would fail with the following error:

Error [ERR_TLS_CERT_ALTNAME_INVALID]: Hostname/IP does not match certificate's altnames: Host: localhost. is not cert's CN: example-dev:pg12

Do not disable certificate verification by setting rejectUnauthorized: false.

Switching between environments…

Ensure that you can easily specify the target database when launching the app locally, as well as when running any db administration scripts. For example:

$ yarn start                  # Connects to `example_dev` (default)
$ yarn start --env=local      # Connects to the local database
$ yarn start --env=test       # Connects to the test (QA) database
$ yarn db:migrate --env=local # Migrates the local database
$ yarn db:migrate --env=test  # Migrates the test (QA) database

Storing db passwords…

While it’s totally fine to store the (shared) dev database password in the code repository, storing production secrets there must be avoided. Below are a couple of lightweight approaches that you can use for that purpose.

(1) Add .env.*.override files to .gitignore and move the PGPASSWORD env variables into env files that are not committed to the code repo.

.env.local          # Settings for the local database
.env.dev            # Settings for the dev (shared) database
.env.test           # Settings for the test (QA) database
.env.prod           # Settings for the production database
.env.prod.override  # Overrides for the prod environment

Which would be loaded by using dotenv like this:

import dotenv from "dotenv";
import minimist from "minimist";

const { env } = minimist(process.argv.slice(2), {
  default: { env: "dev" },
});

dotenv.config({ path: `.env.${env}.override` });
dotenv.config({ path: `.env.${env}` });
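Note that dotenv never overwrites environment variables that are already set, which is why the .override file is loaded first: its values take precedence over those in the base .env.{env} file.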

(2) Alternatively, store db passwords and other secrets encrypted, and decrypt them at runtime using either Google Cloud Secret Manager or a master passphrase from an .env.override file that is excluded from the repository via .gitignore.

Optionally, add a script for reading/writing secrets from/to .env files:

yarn env:set {name} {secret}  # Encrypt and save to .env.{env}
yarn env:get {name}           # Read from .env.{env} and decrypt

This can be easily implemented by using the native crypto Node.js module.
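A minimal sketch of what that could look like (the helper names are hypothetical; AES-256-GCM with a scrypt-derived key is one reasonable choice):

import crypto from "crypto";

// Encrypt a secret with a master passphrase; the base64 output is safe
// to store in an .env file
export function encryptSecret(secret, passphrase) {
  const salt = crypto.randomBytes(16);
  const iv = crypto.randomBytes(12);
  const key = crypto.scryptSync(passphrase, salt, 32);
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(secret, "utf8"), cipher.final()]);
  return Buffer.concat([salt, iv, cipher.getAuthTag(), data]).toString("base64");
}

// Decrypt a value produced by encryptSecret() using the same passphrase
export function decryptSecret(encoded, passphrase) {
  const buf = Buffer.from(encoded, "base64");
  const key = crypto.scryptSync(passphrase, buf.subarray(0, 16), 32);
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, buf.subarray(16, 28));
  decipher.setAuthTag(buf.subarray(28, 44));
  return Buffer.concat([decipher.update(buf.subarray(44)), decipher.final()]).toString("utf8");
}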

Do not encrypt passwords for the “dev” environment. It will make it easier for everyone on your team to just clone the repo and launch the project without a need to configure or install anything (other than Node.js).

Using Cloud SQL Proxy…

It’s also possible to use the Google Cloud SQL Proxy when connecting to a Cloud SQL server from the local machine (or a CI/CD server). This way, you will not need to worry about tweaking SSL and Cloud SQL firewall settings. But I personally find it less convenient.
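For reference, launching the proxy for the instance above would look something like this:

$ cloud_sql_proxy -instances=example-dev:us-central1:pg12=tcp:5432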

Using PostgreSQL CLI tools…

If you’re planning to use a hosted SQL database most of the time, you can install just the CLI tools without installing the whole PostgreSQL server on your local machine. On macOS, psql, pg_dump, pg_restore, and other CLI tools can be installed using Homebrew:

$ brew install libpq
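Note that Homebrew installs libpq as keg-only, so the binaries may need to be linked manually (or added to your PATH):

$ brew link --force libpq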

Creating backups…

By default, Google Cloud SQL writes backups to a Google Storage bucket daily; you would just need to configure the backup window during which these operations take place.

In addition to that, you need to ensure that a fresh backup of the production database is created as part of the CI/CD deployment workflow.
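An on-demand backup can be triggered from the workflow with gcloud (a sketch; the project ID is a placeholder):

$ gcloud sql backups create --instance=pg12 --project=example-prod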

The example source code is available on GitHub; see Node.js API Starter Kit.
BTW, if you have a solid startup idea and would like to join our efforts in bringing it to life, please don’t hesitate to get in touch on Discord or Telegram. I’ll be happy to discuss it and see how we could collaborate.

See you around!
