Set Up GCP Cloud Functions Triggered by Cloud Scheduler with Terraform

Raz Akbari
Published in Geek Culture
9 min read · Sep 13, 2021

This is a step-by-step tutorial on how to set up GCP Cloud Functions and trigger them automatically with Cloud Scheduler, all with Terraform.

Rome Villa Borgese

Requirements

GCP Configuration

You probably already have some of the necessary configuration in place; check the following to make sure you are not missing anything.

Create a Service Account (SA)

As the name suggests, it’s an account that other services use to apply configurations in GCP. Each SA can have multiple roles, which grant the necessary permissions to its key holder. We will use this SA, let’s call it tutorial-sa, both for creating resources through Terraform and for letting Cloud Scheduler invoke the Cloud Function.

For the purpose of this tutorial, assign the following roles to tutorial-sa:

  • Cloud Functions Admin: includes the cloudfunctions.functions.getIamPolicy permission, which we need to create a functional authenticated Cloud Function. You will see this in detail later on.
  • Cloud Functions Developer: to deploy, update, and delete functions
  • Cloud Functions Invoker: to be able to invoke a function
  • Cloud Functions Service Agent
  • Cloud Scheduler Admin

Add a JSON key to it, and keep the key somewhere safe.
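For reference, if you already have credentials that are allowed to manage IAM, the service account and its role bindings could also be created with Terraform itself. A minimal sketch, with the project ID as a placeholder and only two of the five roles shown:

# Sketch: the tutorial-sa service account and two of its role bindings
resource "google_service_account" "tutorial_sa" {
  account_id   = "tutorial-sa"
  display_name = "Tutorial service account"
}

resource "google_project_iam_member" "cloud_functions_admin" {
  project = "<gcp_project_id>"
  role    = "roles/cloudfunctions.admin"
  member  = "serviceAccount:${google_service_account.tutorial_sa.email}"
}

resource "google_project_iam_member" "cloud_scheduler_admin" {
  project = "<gcp_project_id>"
  role    = "roles/cloudscheduler.admin"
  member  = "serviceAccount:${google_service_account.tutorial_sa.email}"
}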

Create Terraform bucket (Optional)

If you want to store your Terraform state file on remote storage instead of your local machine, you need to create a bucket on Google Cloud Storage (GCS) beforehand.

Search for Cloud Storage in your GCP project and create a bucket. The bucket name must be globally unique.

Enable APIs

All resource creation, updates, and deletions happen through a set of API calls. In GCP we need to enable these APIs beforehand; you can disable them from the dashboard later. To activate an API, type its name in the search area of the GCP dashboard top bar, choose the Marketplace entry from the drop-down list to be taken to the appropriate page, and click the Enable button. You need to have the following APIs enabled:

  • Cloud Build API
  • Cloud Scheduler API
  • Compute Engine API
  • Cloud Functions API

You can check all your enabled APIs in the GCP APIs & Services dashboard.
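If you prefer to keep this step in code as well, the google_project_service resource can enable these APIs from Terraform instead of the console. A minimal sketch, with the project ID as a placeholder:

# Sketch: enable the APIs this tutorial relies on
resource "google_project_service" "services" {
  for_each = toset([
    "cloudbuild.googleapis.com",
    "cloudscheduler.googleapis.com",
    "compute.googleapis.com",
    "cloudfunctions.googleapis.com",
  ])

  project = "<gcp_project_id>"
  service = each.value

  # Leave the API enabled even if this resource is later destroyed
  disable_on_destroy = false
}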

Terraform Configuration

Open an empty folder in your favourite IDE; I’m going to go with Visual Studio. I suggest you install a Terraform extension in your IDE, which will help you with syntax highlighting and autocompletion. In Visual Studio you can go with HashiCorp Terraform.

Add a new Terraform file to your empty folder, I am going to call it backend-config.tf, and paste in the following content.

The terraform block is for configuring Terraform itself. Let’s explore its parts.

terraform {
  backend "gcs" {
    bucket = "<bucket-name>"
    prefix = "state"
  }

  required_version = ">= 0.12.7"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.82.0"
    }
  }
}

provider "google" {
  project = "<gcp_project_id>"
  region  = "<region_name>"
  zone    = "<zone_name>"
}

Terraform Backend Configuration (Optional)

Inside the terraform block is also where we can embed our backend configuration. Terraform uses the backend configuration to determine where to store the state file and where to run its operations (API calls). If you don’t configure the backend, Terraform will use its default behaviour, meaning:

  • Terraform will store its state file locally, within the current working directory, instead of in remote storage like a GCP bucket.
  • The API calls against the infrastructure services (operations) that manipulate resources like Cloud Functions will be made from your local machine instead of Terraform Cloud or Terraform Enterprise.

Our configuration stores the Terraform state file in the GCP bucket and runs operations from the local machine.

The gcs backend also supports state locking, meaning that while changes are being made to resources, it holds a lock on the state file so no one else can make changes.

Terraform Required Version

We can add a constraint on the version of Terraform to be used while manipulating our resources. If the running version of Terraform does not satisfy this requirement, it will produce an error without taking any action.

It’s good practice to set the required version to save your state file from getting corrupted.

Provider Requirements

To work with remote systems such as GCP, Terraform relies on plugins called providers. To enable these providers, we declare them inside the terraform block.

A provider requirement consists of a local name, a source location which tells Terraform from which registry it can download the plugin, and a version constraint. The provider version is optional; if it is not set, the latest will be used. For our Google provider you can check out this link. Click on the Use Provider button to get the configuration.

Now let’s configure the provider itself. We need to pass our GCP project ID, and the region and zone where the resources will be created. As an example I am going with europe-west1 as the region and europe-west1-a as the zone. You can pass these values dynamically with Terraform variables (see the sketch below).

Check out this link to get the list of available regions and zones in GCP. You need the Compute Engine API enabled for this purpose.
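A minimal variables sketch might look like the following; the variable names are my own choice:

# variables.tf (sketch; variable names are illustrative)
variable "project_id" {
  type = string
}

variable "region" {
  type    = string
  default = "europe-west1"
}

variable "zone" {
  type    = string
  default = "europe-west1-a"
}

# The provider block can then reference them
provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}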

Cloud Functions with Terraform

Terraform has good documentation when it comes to resources. Here is the page on Cloud Functions for GCP.

GCP Cloud Functions can pull the code they run from buckets. We will store the code project in a bucket and configure our Cloud Function to use it. To do so we need to create:

  • Bucket resource
  • Bucket object resource which holds our code
  • Cloud function itself

Let’s create a file called main.tf and add our resources.

Bucket

The bucket only gets a name; remember, the name must be globally unique. You could also reuse the bucket we created for Terraform to avoid creating a new one.

resource "google_storage_bucket" "bucket" {
name = "<unique-bucket-name>"
}

Bucket Object

GCP Cloud Functions supports many languages, including Go, Node.js, Java, Python, etc. It can load a zip archive containing the code project to run. To have a Node.js application run by a Cloud Function, we need to put at least a package.json and a JavaScript file in the zip archive. Let’s use the sample Hello World project from the GCP Cloud Functions examples, with a small twist.

Let’s create an index.js file that responds with a simple hello. Later on we will see where to pass the environment variable.

exports.helloWorld = (req, res) => {
  let name = process.env.name;
  let message = `Hello ${name}`;
  res.status(200).send(message);
};

and a package.json file with:

{
  "name": "sample-http",
  "version": "0.0.1"
}

Create an index.zip containing index.js and package.json, and put it in the root directory of our project. Now let’s create the bucket object:

resource "google_storage_bucket_object" "cloud-function-archive" {
name = "index.zip"
bucket = google_storage_bucket.bucket.name
#relative path to your index.zip file from the working directory
source = "./index.zip"
}

The bucket object gets:

  • An object name,
  • The bucket to put the file in; if you created the “google_storage_bucket” “bucket” resource in the previous step, you can pass google_storage_bucket.bucket.name as a reference, otherwise just put a string with the name of your already created bucket.
  • The relative path to the index.zip file.
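If you prefer not to zip the files by hand, the archive_file data source from the hashicorp/archive provider can build the zip for you. A sketch, assuming index.js and package.json sit in a src/ folder of your choosing (the folder name is illustrative) and that the archive provider is declared:

# Sketch: let Terraform build the zip instead of creating index.zip manually
data "archive_file" "function_source" {
  type        = "zip"
  source_dir  = "./src"       # assumed folder holding index.js and package.json
  output_path = "./index.zip"
}

# The bucket object then points at the generated archive
resource "google_storage_bucket_object" "cloud-function-archive" {
  name   = "index.zip"
  bucket = google_storage_bucket.bucket.name
  source = data.archive_file.function_source.output_path
}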

Cloud Function

Now it’s time for our Cloud Function:

resource "google_cloudfunctions_function" "function" {
name = "terraform-tutorial"
description = "Hello World example"
runtime = "nodejs14"

available_memory_mb = 128
# Gets a string bucket name or a reference to a resource
source_archive_bucket = google_storage_bucket.bucket.name
source_archive_object = google_storage_bucket_object.cloud-function-archive.name
trigger_http = true
entry_point = "helloWorld"
environment_variables = {
name= "terraform"
}
}

Name and runtime are required. Notice the runtime is set to nodejs14; you can find all the available runtimes here.

If you created the “google_storage_bucket” “bucket” resource in the earlier steps, pass its name attribute as a reference to source_archive_bucket; otherwise put a string with the bucket name. Add a reference for the bucket object as well.

trigger_http set to true means the function will be triggered through an HTTP call to its endpoint. Later on we will use Cloud Scheduler to trigger the function through this HTTP call. POST, GET, PUT, DELETE, and OPTIONS are the supported HTTP methods.

As the name suggests, entry_point is the name of the function in the code executed when the Cloud Function is triggered.

We also pass the environment variables for our Cloud Function, as shown in the code block. This is the variable we use in the index.js file.

Now let’s move on to regulating the invocation of our Cloud Function:

resource "google_cloudfunctions_function_iam_member" "invoker" {  project = google_cloudfunctions_function.function.project
region = google_cloudfunctions_function.function.region
cloud_function = google_cloudfunctions_function.function.name
role = "roles/cloudfunctions.invoker"
member = "serviceAccount:<terraform_sa_email>"
}

There are two policies when it comes to our Cloud Function invocation: allUsers authorisation or single-member authorisation; we choose the latter. This configuration gives permission only to our tutorial-sa to invoke this specific Cloud Function.

Had we chosen the allUsers policy, the Cloud Function invocation would have been public to everyone.
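For reference, the public variant would be the same resource with allUsers as the member; a sketch:

# Sketch: public invocation (not what we use in this tutorial)
resource "google_cloudfunctions_function_iam_member" "public-invoker" {
  project        = google_cloudfunctions_function.function.project
  region         = google_cloudfunctions_function.function.region
  cloud_function = google_cloudfunctions_function.function.name
  role           = "roles/cloudfunctions.invoker"
  member         = "allUsers"
}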

Cloud Schedulers With Terraform

Cloud Scheduler runs jobs. In our case we want to create a job that triggers an HTTP call towards our Cloud Function. Here is the documentation on this resource.

Create an App Engine App

GCP App Engine is where we can have our serverless applications running. To use Cloud Scheduler, our GCP project must have an App Engine app. Go to App Engine and click on Create App. No further configuration is needed.
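If you would rather keep this step in Terraform as well, the google_app_engine_application resource can create the app; a minimal sketch, with the project ID as a placeholder and a location chosen to match our region (an assumption on my part):

# Sketch: create the App Engine app with Terraform instead of the console
resource "google_app_engine_application" "app" {
  project     = "<gcp_project_id>"
  location_id = "europe-west1" # must be a valid App Engine location
}

Keep in mind that an App Engine app cannot be deleted once created, only disabled.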

Cloud Scheduler

We are going to create a job that triggers an HTTP call towards our Cloud Function every 2 minutes. Copy and paste the following into main.tf:

resource "google_cloud_scheduler_job" "hellow-world-job" {
name = "terraform-tutorial"
description = "Hello World every 2minutes"
schedule = "0/2 * * * *"
http_target {
http_method = "GET"
uri = google_cloudfunctions_function.function.https_trigger_url
oidc_token {
service_account_email = "<terraform-sa-email>"
}
}

Notice the http_target block: the uri attribute gets a reference to our Cloud Function resource in the main.tf file.

Let’s talk about the oidc_token. We use this token type when making calls towards GCP endpoints that do not end with *.googleapis.com. Since our Cloud Function trigger URL is not one of them, oidc_token suits us.

This Cloud Scheduler job has permission to invoke our Cloud Function because earlier we configured the Cloud Functions Invoker IAM policy to accept HTTP calls from our tutorial-sa service account.

Let’s Run It

There is a series of steps to initialise and run our project.

terraform init

This command initialises the Terraform directory; it downloads the provider and stores it in a hidden directory called .terraform. It also creates the state file. If the Terraform backend is set to a remote one like GCS, the file will be in the configured bucket; otherwise you will see it in your working directory. Before running the command we have to make sure our GCP provider is authorised to make API calls towards the GCP project.

GCP Provider Authorisation

We give authorisation to our GCP provider by supplying our service account key to Terraform. To do so, we need to set an environment variable:

export GOOGLE_APPLICATION_CREDENTIALS="<absolute_path_to_service_account_key>"
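Alternatively, the Google provider also accepts a credentials argument that points directly at the key file, in case you prefer not to rely on the environment variable; a sketch, with the path kept as a placeholder:

# Sketch: authenticate the provider directly with the SA key file
provider "google" {
  credentials = file("<absolute_path_to_service_account_key>")
  project     = "<gcp_project_id>"
  region      = "<region_name>"
  zone        = "<zone_name>"
}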

Now let’s try terraform init again; it will successfully initialise Terraform. Checking our GCP bucket we will see that, under the directory path we defined as the prefix, a default.tfstate file is stored.

terraform plan

This command is like a dry run on the client side: it shows all the changes that would be made to your infrastructure. In case there is a syntax error, this command will point it out.

terraform apply

Now let’s apply our changes. Terraform will create four resources; you should see them being created one by one.

And we are done.

Result Verification

On the GCP dashboard you will have a Cloud Scheduler job with a successful run of the Cloud Function. Check out the trigger URL on the Scheduler job, and the authentication settings on the Cloud Function, to see the effect of the attributes we set on the Terraform side.

You can also check the logs on both the Scheduler job and the Cloud Function to see the successful result.
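If you want the trigger URL printed right after terraform apply, an output block does it; a small sketch, with an output name of my own choosing:

# Sketch: print the function's trigger URL after apply
output "function_trigger_url" {
  value = google_cloudfunctions_function.function.https_trigger_url
}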

If you want to keep the resources, pause the Cloud Scheduler job so it will not invoke the Cloud Function every two minutes.

Here is the GitHub link to the project.

Clean Up

To clean up, take the following steps.

Terraform destroy

Run the terraform destroy command. If you have made manual changes on the GCP dashboard to the resources that Terraform created, the destroy command could give you an error. This is because you now have a discrepancy between the Terraform state file and your actual infrastructure resources.

You can either roll back your manual changes on GCP, or fix the state file by editing the JSON values in it or removing the problematic resource from the state entirely (for example with terraform state rm), and then manually removing the resource from the GCP dashboard.

You can read more on Teraform destroy here: https://spacelift.io/blog/how-to-destroy-terraform-resources

Remove Buckets

All the manually created resources including the Terraform state file bucket should also be removed manually.

Disable APIs

You can also disable all the APIs we enabled at the beginning. As far as I know the API calls are free up to certain quotas, although I have not checked all of their pricing details.
