Automatically back up Cloud SQL databases (MySQL, PostgreSQL, and Microsoft SQL Server) and store the exports in a GCS bucket.
Step 1: Enable the Cloud SQL Admin API
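You can enable the API in the console, or with a single gcloud command (assuming gcloud is already pointed at the target project):
# Enable the Cloud SQL Admin API, which the export calls below depend on
gcloud services enable sqladmin.googleapis.com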
Step 2: Create Cloud SQL instances for each database type
- MySQL
- PostgreSQL
- MS SQL
You can create the instances in the console (GUI), or use the gcloud commands below.
#MySQL
gcloud sql instances create myinstance \
--database-version=MYSQL_8_0 \
--cpu=2 \
--memory=7680MB \
--region=us-central1
#PostgreSQL
gcloud sql instances create myinstance \
--database-version=POSTGRES_14 \
--tier=db-perf-optimized-N-4 \
--edition=ENTERPRISE_PLUS \
--region=us-central1
#MS SQL
gcloud sql instances create myinstance \
--database-version=SQLSERVER_2017_STANDARD \
--region=us-central1 \
--cpu=2 \
--memory=7680MB \
--root-password=EXAMPLE-PASSWORD
Terraform code to create a Cloud SQL MySQL instance
resource "google_sql_database_instance" "mysql_pvp_instance_name" {
name = "mysql-pvp-instance-name"
region = "asia-northeast1"
database_version = "MYSQL_8_0"
root_password = "abcABC123!"
settings {
tier = "db-f1-micro"
password_validation_policy {
min_length = 6
complexity = "COMPLEXITY_DEFAULT"
reuse_interval = 2
disallow_username_substring = true
enable_password_policy = true
}
}
# set `deletion_protection` to true, will ensure that one cannot accidentally delete this instance by
# use of Terraform whereas `deletion_protection_enabled` flag protects this instance at the GCP level.
deletion_protection = false
}
Set the default Google Cloud project where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory:
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Prepare the directory
Each Terraform configuration file must have its own directory (also called a root module).
- In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example main.tf. In this tutorial, the file is referred to as main.tf.
mkdir DIRECTORY && cd DIRECTORY && touch main.tf
- If you are following a tutorial, you can copy the sample code in each section or step.
- Copy the sample code into the newly created main.tf. Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.
- Review and modify the sample parameters to apply to your environment.
- Save your changes.
- Initialize Terraform. You only need to do this once per directory:
terraform init
Optionally, to use the latest Google provider version, include the -upgrade option:
terraform init -upgrade
Apply the changes
- Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
terraform plan
- Make corrections to the configuration as necessary.
- Apply the Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
Delete the changes
To delete your changes, do the following:
- To disable deletion protection, in your Terraform configuration file set the deletion_protection argument to false.
deletion_protection = false
- Apply the updated Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
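After deletion protection is disabled and applied, you can remove the resources that Terraform previously created by running the standard destroy command and entering yes at the prompt:
# Destroy the resources defined in this directory's configuration
terraform destroy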
Similarly, you can create the PostgreSQL and MS SQL instances through Terraform.
resource "google_sql_database_instance" "postgres_pvp_instance_name" {
name = "postgres-pvp-instance-name"
region = "asia-northeast1"
database_version = "POSTGRES_14"
root_password = "abcABC123!"
settings {
tier = "db-custom-2-7680"
password_validation_policy {
min_length = 6
reuse_interval = 2
complexity = "COMPLEXITY_DEFAULT"
disallow_username_substring = true
password_change_interval = "30s"
enable_password_policy = true
}
}
# set `deletion_protection` to true, will ensure that one cannot accidentally delete this instance by
# use of Terraform whereas `deletion_protection_enabled` flag protects this instance at the GCP level.
deletion_protection = false
}
MS SQL Terraform Code
# [START cloud_sql_sqlserver_instance_80_db_n1_s2]
resource "google_sql_database_instance" "instance" {
  name             = "sqlserver-instance"
  region           = "us-central1"
  database_version = "SQLSERVER_2019_STANDARD"
  root_password    = "INSERT-PASSWORD-HERE"
  settings {
    tier = "db-custom-2-7680"
  }
  # Setting `deletion_protection` to true prevents accidental deletion of this instance
  # through Terraform, whereas the `deletion_protection_enabled` flag protects the instance at the GCP level.
  deletion_protection = false
}
# [END cloud_sql_sqlserver_instance_80_db_n1_s2]
# [START cloud_sql_sqlserver_instance_user]
resource "random_password" "pwd" {
  length  = 16
  special = false
}
resource "google_sql_user" "user" {
  name     = "user"
  instance = google_sql_database_instance.instance.name
  password = random_password.pwd.result
}
# [END cloud_sql_sqlserver_instance_user]
Step 3: Create a GCS bucket for each database and grant write permission to the Cloud SQL instance service account
Follow the steps below.
# Environment variables for the project and bucket
export PROJECT_ID=your-project-id
export BUCKET_NAME=bucket-name
# Create the bucket
gsutil mb -p $PROJECT_ID -l asia gs://$BUCKET_NAME
# Look up the Cloud SQL instance's service account
export SA_NAME=$(gcloud sql instances describe your-database-instance-name --project=$PROJECT_ID --format="value(serviceAccountEmailAddress)")
# Grant write access to the service account
gsutil acl ch -u ${SA_NAME}:W gs://$BUCKET_NAME
Step 4: Create a service account and grant it the required roles
For Cloud SQL:
- Cloud SQL Client (roles/cloudsql.client):
- This role allows the service account to connect to and manage Cloud SQL instances.
- Necessary for Cloud Functions to interact with Cloud SQL.
- Cloud SQL Admin (roles/cloudsql.admin) — Optional:
- If your automation involves creating or modifying Cloud SQL instances, this role might be necessary.
- Be cautious and grant the least privilege required for the specific tasks.
For Cloud Storage (to store backups):
- Storage Object Creator (roles/storage.objectCreator):
- This role allows the service account to create objects (files) in a Cloud Storage bucket.
- Necessary for storing backups in the specified bucket.
- Storage Object Viewer (roles/storage.objectViewer):
- If your Cloud Function needs to read the contents of the Cloud Storage bucket, you can grant this role.
- Only use it if necessary, and be mindful of the principle of least privilege.
- Storage Legacy Bucket Reader (roles/storage.legacyBucketReader) — Optional:
- If your Cloud Storage bucket is in a legacy bucket ACL configuration, you might need this role to read from the bucket.
- It’s recommended to migrate to IAM permissions, but if you’re still using legacy ACLs, this role might be necessary.
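The grants below assume the runtime service account already exists; if you still need to create it, a minimal gcloud sketch (the account name sql-backup-fn-sa is just an example) looks like this:
# Create the service account the Cloud Function will run as (example name)
gcloud iam service-accounts create sql-backup-fn-sa \
--display-name="Cloud SQL backup function SA"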
# Grant Cloud SQL roles
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:your-cloud-function-service-account@your-project.iam.gserviceaccount.com" \
--role="roles/cloudsql.client"
# Optional: Grant Cloud SQL Admin role
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:your-cloud-function-service-account@your-project.iam.gserviceaccount.com" \
--role="roles/cloudsql.admin"
# Grant Cloud Storage roles
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:your-cloud-function-service-account@your-project.iam.gserviceaccount.com" \
--role="roles/storage.objectCreator"
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:your-cloud-function-service-account@your-project.iam.gserviceaccount.com" \
--role="roles/storage.objectViewer"
# Optional: Grant broader Cloud Storage access at the project level
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
--member="serviceAccount:your-cloud-function-service-account@your-project.iam.gserviceaccount.com" \
--role="roles/storage.admin"
# Allow the service account to invoke the Cloud Function
# (for 2nd gen functions, also assign Cloud Run Invoker on the underlying Cloud Run service; see below)
gcloud functions add-iam-policy-binding YOUR_FUNCTION_NAME \
--region="us-central1" \
--member="serviceAccount:your-cloud-function-service-account@your-project.iam.gserviceaccount.com" \
--role="roles/cloudfunctions.invoker"
Step 5: Create a Cloud Function for each database type
To create Cloud Functions, follow the steps given below:
1. Enter the function name, select the region and environment.
2. In the trigger section, you can require authentication; however, for the purpose of this demo, I have chosen to allow unauthenticated invocations.
3. Finally, select the runtime service account which you created in Step 4.
Click on next →
In index.js, enter the code below:
const {google} = require("googleapis");
const {auth} = require("google-auth-library");
var sqladmin = google.sqladmin("v1beta4");
exports.exportDatabase = (_req, res) => {
async function doIt() {
const authRes = await auth.getApplicationDefault();
let authClient = authRes.credential;
var request = {
// Project ID of the project that contains the instance to be exported.
project:"your project name",
// Cloud SQL instance ID. This does not include the project ID.
instance: "sql-server-instance-name",
resource: {
// Contains details about the export operation.
exportContext: {
// This is always sql#exportContext.
// The file type for the specified uri (e.g. SQL or CSV)
fileType: "BAK", // CSV//SQL
/**
* The path to the file in GCS where the export will be stored.
* The URI is in the form gs://bucketName/fileName.
* If the file already exists, the operation fails.
* If fileType is SQL and the filename ends with .gz, the contents are compressed.
*/
uri: `gs://your-bucket-name/backup-${Date.now()}.bak`,
/**
* Databases from which the export is made.
* If fileType is SQL and no database is specified, all databases are exported.
* If fileType is CSV, you can optionally specify at most one database to export.
* If csvExportOptions.selectQuery also specifies the database, this field will be ignored.
*/
databases:["put-database-name"]
// Options for exporting data as SQL statements.
// sqlExportOptions: {
// /**
// * Tables to export, or that were exported, from the specified database.
// * If you specify tables, specify one and only one database.
// */
// tables: config.tables,
// // Export only schemas?
// schemaOnly: config.schemaOnly
// }
}
},
// Auth client
auth: authClient
};
// Kick off export with requested arguments.
sqladmin.instances.export(request, function(err, result) {
if (err) {
console.log(err);
} else {
console.log(result);
}
res.status(200).send("Command completed");
});
}
doIt();
};
In package.json, put the code below:
{
"name": "export-database",
"version": "0.0.1",
"dependencies": {
"googleapis": "^39.2.0",
"google-auth-library": "3.1.2"
}
}
If you have doubts, please follow the screenshot, and make sure that the entry point name matches the exported function name in the code (exportDatabase).
Before deploying, test to ensure success by clicking the 'Test' button first. Once confirmed, click 'Deploy'.
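If you prefer the CLI over the console, a hedged deployment sketch from the directory containing index.js and package.json (runtime, region, and names are example values; adjust to your setup):
# Deploy the export function; this demo allows unauthenticated invocations,
# so you may also add --allow-unauthenticated, or omit it to require auth
gcloud functions deploy exportDatabase \
--runtime=nodejs18 \
--trigger-http \
--region=us-central1 \
--entry-point=exportDatabase \
--service-account=your-cloud-function-service-account@your-project.iam.gserviceaccount.com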
# For Cloud SQL MySQL: Cloud Function Node.js index.js code
const {google} = require("googleapis");
const {auth} = require("google-auth-library");
var sqladmin = google.sqladmin("v1beta4");
exports.exportDatabase = (_req, res) => {
async function doBackup() {
const authRes = await auth.getApplicationDefault();
let authClient = authRes.credential;
var request = {
// Project ID
project: "kinetic-object-400913",
// Cloud SQL instance ID
instance: "demo-mysql",
resource: {
// Contains details about the export operation.
exportContext: {
// This is always sql#exportContext.
kind: "sql#exportContext",
// The file type for the specified uri (e.g. SQL or CSV)
fileType: "SQL",
/**
* The path to the file in GCS where the export will be stored.
* The URI is in the form gs://bucketName/fileName.
* If the file already exists, the operation fails.
* If fileType is SQL and the filename ends with .gz, the contents are compressed.
*/
uri:`gs://kinetic-object-400913-mysql-demo-auto-backup/backup-${Date.now()}.gz`,
/**
* Databases from which the export is made.
* If fileType is SQL and no database is specified, all databases are exported.
* If fileType is CSV, you can optionally specify at most one database to export.
* If csvExportOptions.selectQuery also specifies the database, this field will be ignored.
*/
databases: ["guestbook"]
}
},
// Auth client
auth: authClient
};
// Kick off export with requested arguments.
sqladmin.instances.export(request, function(err, result) {
if (err) {
console.log(err);
} else {
console.log(result);
}
res.status(200).send("Command completed");
}
);
}
doBackup();
};
Package.json
{
"name": "export-database",
"version": "0.0.1",
"dependencies": {
"googleapis": "^39.2.0",
"google-auth-library": "3.1.2"
}
}
For Cloud SQL PostgreSQL: Cloud Function Node.js index.js code
const {google} = require("googleapis");
const {auth} = require("google-auth-library");
var sqladmin = google.sqladmin("v1beta4");
exports.exportDatabase = (_req, res) => {
async function doIt() {
const authRes = await auth.getApplicationDefault();
let authClient = authRes.credential;
var request = {
// Project ID of the project that contains the instance to be exported.
project:"kinetic-object-400913",
// Cloud SQL instance ID. This does not include the project ID.
instance: "demo-postgresql",
resource: {
// Contains details about the export operation.
exportContext: {
// This is always sql#exportContext.
// The file type for the specified uri (e.g. SQL or CSV)
fileType: "SQL", // CSV
/**
* The path to the file in GCS where the export will be stored.
* The URI is in the form gs://bucketName/fileName.
* If the file already exists, the operation fails.
* If fileType is SQL and the filename ends with .gz, the contents are compressed.
*/
uri:`gs://kinetic-object-400913-postgresql-demo-auto-backup/backup-${Date.now()}.gz`,
/**
* Databases from which the export is made.
* If fileType is SQL and no database is specified, all databases are exported.
* If fileType is CSV, you can optionally specify at most one database to export.
* If csvExportOptions.selectQuery also specifies the database, this field will be ignored.
*/
databases:["postgres"]
// Options for exporting data as SQL statements.
// sqlExportOptions: {
// /**
// * Tables to export, or that were exported, from the specified database.
// * If you specify tables, specify one and only one database.
// */
// tables: config.tables,
// // Export only schemas?
// schemaOnly: config.schemaOnly
// }
}
},
// Auth client
auth: authClient
};
// Kick off export with requested arguments.
sqladmin.instances.export(request, function(err, result) {
if (err) {
console.log(err);
} else {
console.log(result);
}
res.status(200).send("Command completed");
});
}
doIt();
};
package.json
{
"name": "export-database",
"version": "0.0.1",
"dependencies": {
"googleapis": "^39.2.0",
"google-auth-library": "3.1.2"
}
}
Step 6: Create a Cloud Scheduler job for each database type
To back up each database every hour, use the cron schedule 0 * * * *, which runs on the hour: 9:00 AM, 10:00 AM, 11:00 AM, and so on.
Name the schedule, select the region, and set the frequency.
Next, configure the target type and provide the Cloud Function URL so the backup is executed every hour:
Target type: HTTP
URL: Cloud Function URL
HTTP method: POST
Follow the screenshots below.
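Equivalently, a hedged gcloud sketch for creating the job (job name and URL are example values; if your function requires authentication, attach an OIDC token instead of calling it unauthenticated):
# Create an hourly HTTP job that triggers the export function (example values)
gcloud scheduler jobs create http mysql-hourly-backup \
--location=us-central1 \
--schedule="0 * * * *" \
--uri="https://REGION-PROJECT_ID.cloudfunctions.net/exportDatabase" \
--http-method=POST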
Every hour, an automatic backup is created for all three databases and stored in the GCS buckets.
All three database backups are stored in GCS.
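To confirm that the exports are landing where you expect, you can list the bucket contents (using the bucket variable from Step 3):
# List the backup objects written by the Cloud Function
gsutil ls gs://$BUCKET_NAME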
Conclusion:
Backing up your Cloud SQL instances is essential for protecting your data from loss or corruption. Cloud SQL provides a built-in backup feature that allows you to automatically back up your instances to a GCS bucket. This is a reliable and cost-effective way to ensure that your data is always protected.
About Me:
I am an experienced IT professional with a passion for helping businesses embark on their journey to the cloud. With over 15 years of industry experience, I currently serve as a Google Cloud Principal Architect, assisting customers in building highly scalable and efficient solutions on the Google Cloud Platform. My expertise lies in infrastructure and zero-trust security, Google Cloud networking, and cloud infrastructure building using Terraform. I hold several prestigious certifications across Google Cloud, HashiCorp, Microsoft Azure, and Amazon AWS.
Certifications:
1. Google Cloud Certified — Cloud Digital Leader.
2. Google Cloud Certified — Associate Cloud Engineer.
3. Google Cloud Certified — Professional Cloud Architect.
4. Google Cloud Certified — Professional Data Engineer.
5. Google Cloud Certified — Professional Cloud Network Engineer.
6. Google Cloud Certified — Professional Cloud Developer Engineer.
7. Google Cloud Certified — Professional Cloud DevOps Engineer.
8. Google Cloud Certified — Professional Security Engineer.
9. Google Cloud Certified — Professional Database Engineer.
10. Google Cloud Certified — Professional Workspace Administrator.
11. Google Cloud Certified — Professional Machine Learning.
12. HashiCorp Certified — Terraform Associate
13. Microsoft Azure AZ-900 Certified
14. Amazon AWS-Practitioner Certified
Helping professionals and students build their cloud careers. My responsibility is to make cloud content easy to understand! Please #like, #share, and #subscribe for more amazing #googlecloud and #googleworkspace content. If you need any guidance or help, feel free to connect with me:
YouTube: https://www.youtube.com/@growwithgooglecloud
Topmate: https://topmate.io/gcloud_biswanath_giri
Telegram: https://t.me/growwithgcp
Twitter: https://twitter.com/bgiri_gcloud
Instagram: https://www.instagram.com/google_cloud_trainer/
LinkedIn: https://www.linkedin.com/in/biswanathgirigcloudcertified/
Facebook: https://www.facebook.com/biswanath.giri
Linktree: https://linktr.ee/gcloud_biswanath_giri
DM me, and I am happy to help! You can also schedule 1:1 discussions with me on topmate.io/gcloud_biswanath_giri for any Google Cloud-related queries and concerns. 😁