Least Privilege : Create Service Accounts with Custom Roles using modular Terraform, Terragrunt and Cloud Build

Mazlum Tosun
Google Cloud - Community
19 min read · Jan 5, 2023

The goal of this article is to create Service Accounts with Custom Roles in Google Cloud using a CI/CD pipeline and Infrastructure as Code.

The use of custom roles is interesting because it lets us follow the least privilege principle: grant only the permissions that are actually needed.

1. Explanation of the use case presented in this article

The infrastructure we want to manage in Google Cloud consists of Service Accounts carrying predefined and custom roles.

The tools chosen for this use case are:

  • Cloud Build to trigger CI/CD pipelines
  • Terraform for Infrastructure as Code
  • Terragrunt to create the infrastructure from Terraform modules and prevent code duplication with the DRY (don't repeat yourself) concept

Below you can see the use case diagram of this article :

I also created a video on this topic on my GCP YouTube channel; please subscribe to the channel to support my work for the Google Cloud community:

English version

French version

2. Structure of the project

The project name is sa-custom-roles-gcp-terraform

Infrastructure part :

  • The root folder containing all the Terraform modules is infra
  • There are 2 Terraform modules : custom_roles and service_accounts
  • Each Terraform module contains all the needed tf files
  • A terragrunt.hcl file is proposed in the root infra folder and within each Terraform module

CI/CD part:

  • 3 files are proposed for the CI/CD pipeline with Cloud Build:
  • Plan => terraform-plan-modules.yaml
  • Apply => terraform-apply-modules.yaml
  • Destroy => terraform-destroy-modules.yaml

3. Deep dive on the infrastructure Terraform part

To work correctly, the custom_roles module must be created before the service_accounts module.

We will explain later how to guarantee this in the Terragrunt part.

3.1 custom_roles module

This module is responsible for creating the custom roles.

Configuration of custom roles

We configure all the custom roles to create in the resource/custom_roles.json file :

{
  "customRoles": {
    "composer": {
      "roleId": "sa.composer",
      "title": "SA Composer",
      "description": "Least privilege role for SA dedicated to Composer",
      "permissions": [
        "bigquery.jobs.create",
        "bigquery.datasets.get",
        "bigquery.tables.export",
        "bigquery.tables.get",
        "bigquery.tables.create",
        "bigquery.tables.getData",
        "bigquery.tables.update",
        "bigquery.tables.updateData",
        "dataflow.jobs.create",
        "dataflow.jobs.get",
        "dataflow.messages.list",
        "dataflow.jobs.list",
        "iam.serviceAccounts.getIamPolicy",
        "composer.dags.execute",
        "composer.dags.get",
        "composer.dags.list",
        "composer.environments.create",
        "composer.environments.delete",
        "composer.environments.get",
        "composer.environments.list",
        "composer.environments.update",
        "composer.imageversions.list",
        "composer.operations.delete",
        "composer.operations.get",
        "composer.operations.list",
        "artifactregistry.repositories.get",
        "artifactregistry.repositories.list",
        "storage.objects.create",
        "storage.objects.get",
        "storage.objects.getIamPolicy",
        "storage.objects.list",
        "storage.objects.setIamPolicy",
        "storage.objects.update",
        "secretmanager.versions.access"
      ],
      "members": []
    },
    "dataflow": {
      "roleId": "sa.dataflow",
      "title": "SA Dataflow",
      "description": "Least privilege role for SA dedicated to Dataflow",
      "permissions": [
        "dataflow.jobs.create",
        "dataflow.jobs.get",
        "dataflow.jobs.list",
        "dataflow.messages.list",
        "storage.objects.create",
        "storage.objects.delete",
        "storage.objects.get",
        "storage.objects.list",
        "storage.objects.update",
        "bigquery.datasets.getIamPolicy",
        "bigquery.models.export",
        "bigquery.models.getData",
        "bigquery.models.getMetadata",
        "bigquery.models.list",
        "bigquery.tables.updateData",
        "bigquery.tables.createSnapshot",
        "bigquery.tables.export",
        "bigquery.tables.get",
        "bigquery.tables.getData",
        "bigquery.tables.list",
        "resourcemanager.projects.get",
        "bigquery.bireservations.get",
        "bigquery.capacityCommitments.get",
        "bigquery.capacityCommitments.list",
        "bigquery.config.get",
        "bigquery.datasets.get",
        "bigquery.datasets.create",
        "bigquery.jobs.create",
        "bigquery.jobs.list",
        "bigquery.models.list",
        "bigquery.readsessions.create",
        "bigquery.readsessions.getData",
        "bigquery.readsessions.update",
        "bigquery.reservationAssignments.list",
        "bigquery.reservationAssignments.search",
        "bigquery.reservations.get",
        "bigquery.reservations.list",
        "bigquery.savedqueries.get",
        "bigquery.savedqueries.list",
        "bigquery.tables.list",
        "bigquery.transfers.get",
        "bigquerymigration.translation.translate",
        "resourcemanager.projects.get",
        "autoscaling.sites.readRecommendations",
        "autoscaling.sites.writeMetrics",
        "autoscaling.sites.writeState",
        "compute.instanceGroupManagers.update",
        "compute.instances.delete",
        "compute.instances.setDiskAutoDelete",
        "dataflow.jobs.get",
        "dataflow.shuffle.read",
        "dataflow.shuffle.write",
        "dataflow.streamingWorkItems.commitWork",
        "dataflow.streamingWorkItems.getData",
        "dataflow.streamingWorkItems.getWork",
        "dataflow.workItems.lease",
        "dataflow.workItems.sendMessage",
        "dataflow.workItems.update",
        "logging.logEntries.create",
        "storage.buckets.get",
        "storage.objects.create",
        "storage.objects.get"
      ],
      "members": []
    }
  }
}

2 custom roles will be created :

  • One for Cloud Composer
  • The other for Dataflow

Each role defines the needed fields:

  • roleId
  • title
  • description
  • permissions

This JSON-based configuration is readable, easily updatable and maintainable. It keeps us from repeating the same Terraform resource multiple times.

We could also have applied the same concept with Terraform variables, but we preferred the JSON configuration.

Loading configured custom roles in locals.tf file

The custom roles configured previously in the JSON file are loaded in the locals.tf file as follows:

locals {
  custom_roles = jsondecode(file("${path.module}/resource/custom_roles.json"))["customRoles"]
}

The file and jsondecode Terraform functions parse the given file into a JSON object, from which we retrieve the ["customRoles"] node.

The resulting structure for the custom_roles local variable is a Terraform map of maps.
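For illustration, here is a hedged sketch of the shape local.custom_roles takes after decoding (abbreviated, the permission lists are truncated):

```hcl
# Sketch (abbreviated): shape of local.custom_roles after jsondecode.
# Keys are the role aliases from the JSON; values are maps of role fields.
locals {
  custom_roles_shape = {
    composer = {
      roleId      = "sa.composer"
      title       = "SA Composer"
      description = "Least privilege role for SA dedicated to Composer"
      permissions = ["bigquery.jobs.create", "bigquery.datasets.get"] # truncated
      members     = []
    }
    dataflow = {
      roleId      = "sa.dataflow"
      title       = "SA Dataflow"
      description = "Least privilege role for SA dedicated to Dataflow"
      permissions = ["dataflow.jobs.create", "dataflow.jobs.get"] # truncated
      members     = []
    }
  }
}
```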

Module variables in variables.tf file

The module variables are project_id and env.

The env variable is not used here, but we keep it to simulate a real use case: behaviour sometimes differs according to the environment.

variable "project_id" {
  description = "Project ID, used to enforce providing a project id"
  type        = string
}

variable "env" {
  description = "Current env"
  type        = string
}

Creation of custom roles in main.tf file

The main.tf file proposes the resource creating the custom roles retrieved previously in the locals.tf file:

resource "google_project_iam_custom_role" "custom_roles" {
  for_each = local.custom_roles

  project     = var.project_id
  role_id     = each.value["roleId"]
  title       = each.value["title"]
  description = each.value["description"]
  permissions = each.value["permissions"]
}

This resource uses a for_each over the custom roles local map: local.custom_roles

We can then access each field of the map with each.value["fieldName"]

This allows having a single Terraform resource for all the custom roles.
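If other modules needed to reference these roles, the module could also expose them. A hedged sketch of a hypothetical outputs.tf (an assumption for illustration, not necessarily present in the repo):

```hcl
# Hypothetical outputs.tf for the custom_roles module (assumption, not in the article's repo)
output "custom_role_ids" {
  description = "Map of role alias => full resource id of the created custom role"
  value       = { for alias, role in google_project_iam_custom_role.custom_roles : alias => role.id }
}
```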

3.2 service_accounts module

This module is responsible for creating the service accounts carrying the custom roles created previously.

Configuration of service accounts

We configure all the service accounts to create, with their roles, in resource/service_accounts.json:

{
  "serviceAccounts": {
    "sa-composer": {
      "account_id": "sa-composer",
      "display_name": "SA for Composer",
      "owner_role": "roles/iam.serviceAccountAdmin",
      "owner_email": "user:mazlum.tosun@gmail.com",
      "roles": [
        "projects/{{PROJECT_ID}}/roles/sa.composer",
        "roles/composer.serviceAgent",
        "roles/composer.ServiceAgentV2Ext"
      ]
    },
    "sa-dataflow": {
      "account_id": "sa-dataflow",
      "display_name": "SA for Dataflow",
      "owner_role": "roles/iam.serviceAccountAdmin",
      "owner_email": "user:mazlum.tosun@gmail.com",
      "roles": [
        "projects/{{PROJECT_ID}}/roles/sa.dataflow"
      ]
    }
  }
}

2 service accounts will be created :

  • One for Cloud Composer
  • The other for Dataflow

A list of roles is assigned to each service account. We can mix custom and predefined roles.

Each service account defines the needed fields:

  • account_id
  • display_name
  • owner_role
  • owner_email
  • roles

Custom role paths contain the {{PROJECT_ID}} placeholder, which will be replaced by interpolation in the HCL code of the main.tf file.

Example :

projects/{{PROJECT_ID}}/roles/sa => projects/your_project/roles/sa
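This substitution relies on Terraform's built-in replace function, used later in main.tf; a minimal sketch:

```hcl
# Sketch: resolving the {{PROJECT_ID}} placeholder with replace()
locals {
  raw_role      = "projects/{{PROJECT_ID}}/roles/sa.composer"
  resolved_role = replace(local.raw_role, "{{PROJECT_ID}}", var.project_id)
  # With var.project_id = "your_project" this yields:
  # "projects/your_project/roles/sa.composer"
}
```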

Loading configured service accounts in locals.tf file

The service accounts configured previously in the JSON file are loaded in the locals.tf file.

It uses the same approach shown for the custom_roles module:

locals {
  service_accounts = jsondecode(file("${path.module}/resource/service_accounts.json"))["serviceAccounts"]

  sa_roles_flattened = flatten([
    for sa in local.service_accounts : [
      for role in sa["roles"] : {
        account_id   = sa["account_id"]
        display_name = sa["display_name"]
        owner_role   = sa["owner_role"]
        owner_email  = sa["owner_email"]
        role         = role
      }
    ]
  ])
}
  • The service_accounts map is retrieved from the JSON file
  • A new local variable, sa_roles_flattened, flattens each role with its linked service account

The input for the Composer SA is:

"sa-composer": {
  "account_id": "sa-composer",
  "display_name": "SA for Composer",
  "owner_role": "roles/iam.serviceAccountAdmin",
  "owner_email": "user:mazlum.tosun@gmail.com",
  "roles": [
    "projects/{{PROJECT_ID}}/roles/sa.composer",
    "roles/composer.serviceAgent",
    "roles/composer.ServiceAgentV2Ext"
  ]
}

The result for sa_roles_flattened with Terraform is :

[
  {
    account_id   = "sa-composer"
    display_name = "SA for Composer"
    owner_role   = "roles/iam.serviceAccountAdmin"
    owner_email  = "user:mazlum.tosun@gmail.com"
    role         = "projects/{{PROJECT_ID}}/roles/sa.composer"
  },
  {
    account_id   = "sa-composer"
    display_name = "SA for Composer"
    owner_role   = "roles/iam.serviceAccountAdmin"
    owner_email  = "user:mazlum.tosun@gmail.com"
    role         = "roles/composer.serviceAgent"
  },
  {
    account_id   = "sa-composer"
    display_name = "SA for Composer"
    owner_role   = "roles/iam.serviceAccountAdmin"
    owner_email  = "user:mazlum.tosun@gmail.com"
    role         = "roles/composer.ServiceAgentV2Ext"
  },
]

It's worth noting that I used user:mazlum.tosun@gmail.com as the owner for this example, but it's usually better to use a Google Group instead. With a group, we have the flexibility to add or remove members very simply.

Creation of service accounts in main.tf file

The main.tf file proposes the resources creating the service accounts with their roles, retrieved previously in the locals.tf file:

resource "google_service_account" "sa_list" {
  project  = var.project_id
  for_each = local.service_accounts

  account_id   = each.value["account_id"]
  display_name = each.value["display_name"]
}

resource "google_project_iam_member" "sa_roles" {
  for_each = { for idx, sa in local.sa_roles_flattened : "${sa["account_id"]}_${sa["role"]}" => sa }

  project = var.project_id
  role    = replace(each.value["role"], "{{PROJECT_ID}}", var.project_id)
  member  = "serviceAccount:${each.value["account_id"]}@${var.project_id}.iam.gserviceaccount.com"

  depends_on = [google_service_account.sa_list]
}

resource "google_service_account_iam_member" "admin_account_iam" {
  for_each = local.service_accounts

  service_account_id = "projects/${var.project_id}/serviceAccounts/${each.value["account_id"]}@${var.project_id}.iam.gserviceaccount.com"
  role               = each.value["owner_role"]
  member             = each.value["owner_email"]

  depends_on = [google_service_account.sa_list]
}

google_service_account/sa_list resource:

This resource creates all the service accounts with a for_each over the local.service_accounts variable.

google_project_iam_member/sa_roles resource:

This resource assigns all the roles to the service accounts with a for_each over the local.sa_roles_flattened variable.

The following code transforms the local.sa_roles_flattened list into a map, with a unique key per element and the sa element as value: sa[account_id]_sa[role] => sa

for_each   = {for idx, sa in local.sa_roles_flattened : "${sa["account_id"]}_${sa["role"]}" => sa}
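The keys produced by this expression concatenate the account id and the role path; the same keys appear later in the plan output. A sketch of the resulting map (values elided):

```hcl
# Sketched keys of the map built by the for_each expression (values elided)
{
  "sa-composer_projects/{{PROJECT_ID}}/roles/sa.composer" = { /* sa object */ }
  "sa-composer_roles/composer.serviceAgent"               = { /* sa object */ }
  "sa-composer_roles/composer.ServiceAgentV2Ext"          = { /* sa object */ }
  "sa-dataflow_projects/{{PROJECT_ID}}/roles/sa.dataflow" = { /* sa object */ }
}
```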

This resource depends on sa_list.

google_service_account_iam_member/admin_account_iam resource:

This resource assigns the owner to the service accounts with a for_each over the local.service_accounts variable.

This resource depends on sa_list.

In these resources, the {{PROJECT_ID}} placeholder from the JSON configuration is replaced by var.project_id in the HCL code.

4. Deep dive on the infrastructure Terragrunt part

Terragrunt is a thin wrapper on Terraform that provides extra tools for keeping your configurations DRY (don’t repeat yourself), working with multiple Terraform modules, and managing remote state.

In this use case we chose to have 2 separate Terraform modules and, as mentioned above, the custom_roles module must be created before the service_accounts module.

Terragrunt can plan/apply/destroy multiple modules at once with a single command, and set dependencies between them.

Let’s take a look at the structure with Terragrunt configuration files :

You can see different terragrunt.hcl files :

  • One at the root of infra folder
  • One per module

4.1 The terragrunt.hcl file at the root : infra/terragrunt.hcl

This is the root config file; its configuration applies to all the modules:

remote_state {
  backend = "gcs"

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }

  config = {
    bucket = get_env("TF_STATE_BUCKET")
    prefix = "${get_env("TF_STATE_PREFIX")}/${path_relative_to_include()}"
  }
}

generate "versions" {
  path      = "versions.tf"
  if_exists = "overwrite_terragrunt"

  contents = <<EOF
terraform {
  required_version = ">= 0.13.2"

  required_providers {
    google = "${get_env("GOOGLE_PROVIDER_VERSION")}"
  }
}
EOF
}

You can see 2 blocks: remote_state and generate "versions"

remote_state block:

It centralizes the Terraform remote state configuration in one place, following the DRY concept. The remote state backend is gcs for the 2 modules.

The nested generate block indicates the file path: backend.tf

If a backend.tf file already exists in a child module, Terragrunt will replace it with the generated one:

generate = {
  path      = "backend.tf"
  if_exists = "overwrite"
}

The nested config block configures the Terraform bucket and prefix containing the remote states. These parameters are set with environment variables provided by Cloud Build.

We will explain that in more depth in the chapter dedicated to Cloud Build.

config = {
  bucket = get_env("TF_STATE_BUCKET")
  prefix = "${get_env("TF_STATE_PREFIX")}/${path_relative_to_include()}"
}

path_relative_to_include() resolves to the folder name of the module that includes the parent hcl file. Example with custom_roles: my_prefix_folder/custom_roles

generate "versions" block:

This block generates the versions.tf file for each module. If the file already exists in the module, Terragrunt will replace it with the generated one, thanks to if_exists = "overwrite_terragrunt"

This technique based on generate is really interesting because it generates the needed file in each module very easily, without having to duplicate it.

The Google Cloud provider version is retrieved from an environment variable provided by Cloud Build.

4.2 The terragrunt.hcl file in the custom_roles module: infra/custom_roles/terragrunt.hcl

include "root" {
  path = find_in_parent_folders()
}

This syntax includes the parent terragrunt.hcl file in the current child module: it imports the remote_state block and generates the versions.tf file.

4.3 The terragrunt.hcl file in the service_accounts module: infra/service_accounts/terragrunt.hcl

include "root" {
  path = find_in_parent_folders()
}

dependencies {
  paths = ["../custom_roles"]
}

The first block, as explained above, imports the parent hcl into the child module.

dependencies indicates that the current module depends on the custom_roles module; Terragrunt then handles the modules in the correct order:

  • Plan and apply the module custom_roles before the service_accounts
  • Destroy the module service_accounts before the custom_roles
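This project only needs ordering, so the dependencies block is enough. As a hedged alternative, Terragrunt's dependency block (singular) would also pass outputs from custom_roles into this module, with mocks for plan-time; a sketch, assuming the custom_roles module exposed a hypothetical custom_role_ids output:

```hcl
# Hypothetical alternative: consume outputs from custom_roles
# (assumes a custom_role_ids output exists in that module, which the
# article's repo may not define)
dependency "custom_roles" {
  config_path = "../custom_roles"

  # Mocked value so `terragrunt run-all plan` works before the roles exist
  mock_outputs = {
    custom_role_ids = {}
  }
}

inputs = {
  custom_role_ids = dependency.custom_roles.outputs.custom_role_ids
}
```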

5. Deep dive on the CI/CD Cloud Build part

The project contains 3 Cloud Build YAML files: plan, apply and destroy.

The goal is to run them as manual tasks, in 2 ways:

  • Launch a manual build for each operation from the current directory with the gcloud builds submit command. We want to show how to use this kind of manual execution with Cloud Build.
  • Create manual triggers for each operation and launch them from the Cloud Build UI. This is the final goal we want to achieve for our CI/CD.

We chose manual tasks because Terraform and infrastructure changes are sensitive: we want to check the plan logs before launching the apply.

The destroy gives the possibility to clean up all the resources created for this use case.

5.1 The Terraform plan file

The plan logic is handled in the terraform-plan-modules.yaml file :

steps:
  - name: alpine/terragrunt:1.3.6
    script: |
      terragrunt run-all init
      terragrunt run-all plan --out tfplan.out
    dir: 'infra'
    env:
      - 'TF_VAR_project_id=$PROJECT_ID'
      - 'TF_VAR_env=$_ENV'
      - 'TF_STATE_BUCKET=$_TF_STATE_BUCKET'
      - 'TF_STATE_PREFIX=$_TF_STATE_PREFIX'
      - 'GOOGLE_PROVIDER_VERSION=$_GOOGLE_PROVIDER_VERSION'

There is one step, based on the alpine/terragrunt:1.3.6 Docker image.

The script block contains all the commands to execute:

  • terragrunt run-all init: runs a Terraform init for all the modules (initializes the working directory, downloads providers…)
  • terragrunt run-all plan --out tfplan.out: runs a Terraform plan for all the modules and writes the result to a tfplan.out file

The dir tag sets the working directory for the current step: the infra root folder.

The env tag sets environment variables for the current step.

All the values are passed by the Cloud Build job with variable substitutions:

The $PROJECT_ID variable is a default substitution.

  • TF_VAR_project_id: sets the project ID as an environment variable and a Terraform variable
  • TF_VAR_env: sets the current env (dev for example) as an environment variable and a Terraform variable
  • TF_STATE_BUCKET: sets the Terraform state bucket as an environment variable
  • TF_STATE_PREFIX: sets the Terraform state prefix as an environment variable
  • GOOGLE_PROVIDER_VERSION: sets the Terraform Google provider version as an environment variable

5.2 The Terraform apply file

The apply logic is handled in the terraform-apply-modules.yaml file :

steps:
  - name: alpine/terragrunt:1.3.6
    script: |
      terragrunt run-all init
      terragrunt run-all plan --out tfplan.out
      terragrunt run-all apply --terragrunt-non-interactive tfplan.out
    dir: 'infra'
    env:
      - 'TF_VAR_project_id=$PROJECT_ID'
      - 'TF_VAR_env=$_ENV'
      - 'TF_STATE_BUCKET=$_TF_STATE_BUCKET'
      - 'TF_STATE_PREFIX=$_TF_STATE_PREFIX'
      - 'GOOGLE_PROVIDER_VERSION=$_GOOGLE_PROVIDER_VERSION'

All the logic is the same as the plan part, with one more command:

terragrunt run-all apply --terragrunt-non-interactive tfplan.out

This command runs the apply and creates the infrastructure in Google Cloud.

The apply is based on the tfplan.out file generated, for each module, by the plan launched just before.

5.3 The Terraform destroy file

The destroy logic is handled in the terraform-destroy-modules.yaml file :

steps:
  - name: alpine/terragrunt:1.3.6
    script: |
      terragrunt run-all init
      terragrunt run-all destroy --terragrunt-non-interactive
    dir: 'infra'
    env:
      - 'TF_VAR_project_id=$PROJECT_ID'
      - 'TF_VAR_env=$_ENV'
      - 'TF_STATE_BUCKET=$_TF_STATE_BUCKET'
      - 'TF_STATE_PREFIX=$_TF_STATE_PREFIX'
      - 'GOOGLE_PROVIDER_VERSION=$_GOOGLE_PROVIDER_VERSION'

The logic is the same as in the previous parts, but with the command dedicated to the destroy:

terragrunt run-all destroy --terragrunt-non-interactive

5.4 Cloud Build Service Account

To create triggers and submit jobs, you have to be authenticated in the gcloud CLI with an identity holding the Cloud Build Editor role.

Then, by default, at runtime, Cloud Build uses the default service account dedicated to builds:

[PROJECT_NUMBER]@cloudbuild.gserviceaccount.com

We could have used a user-specified service account, but for the sake of simplicity we chose the default SA.

If you want more details on Cloud Build service accounts, you can check this link from the official documentation :

To be able to create the infra described in this article, we gave the following role to the Cloud Build default service account :

We could have given the Cloud Build SA custom roles with only the necessary privileges, following the least privilege principle, but that's not the focus of this article, so we chose predefined roles.
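For reference, such a grant to the default Cloud Build SA can itself be expressed in Terraform; a hedged sketch (the role shown is a placeholder, replace it with whatever your pipeline actually needs):

```hcl
# Sketch: granting a role to the default Cloud Build service account
data "google_project" "current" {
  project_id = var.project_id
}

resource "google_project_iam_member" "cloudbuild_sa_role" {
  project = var.project_id
  role    = "roles/iam.serviceAccountAdmin" # placeholder role, adapt to your needs
  member  = "serviceAccount:${data.google_project.current.number}@cloudbuild.gserviceaccount.com"
}
```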

5.5 Build manually from the current directory

To submit a build with Cloud Build manually from the current directory, we use gcloud builds submit.

Behind the scenes, Cloud Build creates and uses a GCS bucket for this kind of build, with the following naming convention: {project_id}_cloudbuild

We could also have passed our own bucket to Cloud Build.

Export the following environment variables in your shell session and replace the placeholders with your values:

export PROJECT_ID={{your_project_id}}
export LOCATION={{your_location}}
export TF_STATE_BUCKET={{your_tf_state_bucket}}
export TF_STATE_PREFIX={{your_tf_state_prefix}}
export GOOGLE_PROVIDER_VERSION="= 4.47.0"

Example with the values used for this project :

export PROJECT_ID=gb-poc-373711
export LOCATION=europe-west1
export TF_STATE_BUCKET=gb-poc-terraform-state
export TF_STATE_PREFIX=testmazlum
export GOOGLE_PROVIDER_VERSION="= 4.47.0"

Plan :

gcloud builds submit \
--project=$PROJECT_ID \
--region=$LOCATION \
--config terraform-plan-modules.yaml \
--substitutions _ENV=dev,_TF_STATE_BUCKET=$TF_STATE_BUCKET,_TF_STATE_PREFIX=$TF_STATE_PREFIX,_GOOGLE_PROVIDER_VERSION=$GOOGLE_PROVIDER_VERSION \
--verbosity="debug" .

Apply :

gcloud builds submit \
--project=$PROJECT_ID \
--region=$LOCATION \
--config terraform-apply-modules.yaml \
--substitutions _ENV=dev,_TF_STATE_BUCKET=$TF_STATE_BUCKET,_TF_STATE_PREFIX=$TF_STATE_PREFIX,_GOOGLE_PROVIDER_VERSION=$GOOGLE_PROVIDER_VERSION \
--verbosity="debug" .

Destroy :

gcloud builds submit \
--project=$PROJECT_ID \
--region=$LOCATION \
--config terraform-destroy-modules.yaml \
--substitutions _ENV=dev,_TF_STATE_BUCKET=$TF_STATE_BUCKET,_TF_STATE_PREFIX=$TF_STATE_PREFIX,_GOOGLE_PROVIDER_VERSION=$GOOGLE_PROVIDER_VERSION \
--verbosity="debug" .

All the commands use the same options:

  • project
  • region (example with europe-west1)
  • config: the Cloud Build yaml file
  • substitutions: all the variable substitutions mentioned above and used in the yaml files
  • verbosity: with the debug value, more logs are displayed

5.6 Build from the project hosted in a GitHub repository

In this section, we will show how to create manual triggers for the project hosted on a GitHub repository:

In the Cloud Build home page, you can manage connections and create a new connection to this repository:

  • Click on the MANAGE REPOSITORIES button
  • Click on the CONNECT REPOSITORY button
  • Select the GitHub option
  • Connect to your GitHub repository
  • Select the consent option
  • Click on the DONE button
  • We can see the created connection to the GitHub repository

Plan :

gcloud beta builds triggers create manual \
--project=$PROJECT_ID \
--region=$LOCATION \
--name="terraform-plan" \
--repo="https://github.com/tosun-si/sa-custom-roles-gcp-terraform" \
--repo-type="GITHUB" \
--branch="main" \
--build-config="terraform-plan-modules.yaml" \
--substitutions _ENV=dev,_TF_STATE_BUCKET=$TF_STATE_BUCKET,_TF_STATE_PREFIX=$TF_STATE_PREFIX,_GOOGLE_PROVIDER_VERSION=$GOOGLE_PROVIDER_VERSION \
--verbosity="debug"

Apply :

gcloud beta builds triggers create manual \
--project=$PROJECT_ID \
--region=$LOCATION \
--name="terraform-apply" \
--repo="https://github.com/tosun-si/sa-custom-roles-gcp-terraform" \
--repo-type="GITHUB" \
--branch="main" \
--build-config="terraform-apply-modules.yaml" \
--substitutions _ENV=dev,_TF_STATE_BUCKET=$TF_STATE_BUCKET,_TF_STATE_PREFIX=$TF_STATE_PREFIX,_GOOGLE_PROVIDER_VERSION=$GOOGLE_PROVIDER_VERSION \
--verbosity="debug"

Destroy :

gcloud beta builds triggers create manual \
--project=$PROJECT_ID \
--region=$LOCATION \
--name="terraform-destroy" \
--repo="https://github.com/tosun-si/sa-custom-roles-gcp-terraform" \
--repo-type="GITHUB" \
--branch="main" \
--build-config="terraform-destroy-modules.yaml" \
--substitutions _ENV=dev,_TF_STATE_BUCKET=$TF_STATE_BUCKET,_TF_STATE_PREFIX=$TF_STATE_PREFIX,_GOOGLE_PROVIDER_VERSION=$GOOGLE_PROVIDER_VERSION \
--verbosity="debug"

All the commands use the same options:

  • project
  • region
  • name: the trigger name
  • repo: the link to the repo
  • repo-type: GITHUB in this case
  • branch: the GitHub branch name to use
  • build-config: the Cloud Build yaml file
  • substitutions: all the variable substitutions used in the yaml files
  • verbosity: debug

After the execution of these commands, the triggers appear in the Cloud Build triggers page.

Clicking on the RUN button displays the trigger detail; example for the plan:

We can see all the substitutions and the repo branch in editable fields.

The RUN TRIGGER button will launch the plan trigger.

All the launched builds appear in the Cloud Build history page :

The plan logs indicate that the infra will be created as expected, without unexpected behaviour:

Terraform will perform the following actions:

  # google_project_iam_custom_role.custom_roles["composer"] will be created
  + resource "google_project_iam_custom_role" "custom_roles" {
      + deleted     = (known after apply)
      + description = "Least privilege role for SA dedicated to Composer"
      + id          = (known after apply)
      + name        = (known after apply)
      + permissions = [
          + "artifactregistry.repositories.get",
          + "artifactregistry.repositories.list",
          + "bigquery.datasets.get",
          + "bigquery.jobs.create",
          + "bigquery.tables.create",
          + "bigquery.tables.export",
          + "bigquery.tables.get",
          + "bigquery.tables.getData",
          + "bigquery.tables.update",
          + "bigquery.tables.updateData",
          + "composer.dags.execute",
          + "composer.dags.get",
          + "composer.dags.list",
          + "composer.environments.create",
          + "composer.environments.delete",
          + "composer.environments.get",
          + "composer.environments.list",
          + "composer.environments.update",
          + "composer.imageversions.list",
          + "composer.operations.delete",
          + "composer.operations.get",
          + "composer.operations.list",
          + "dataflow.jobs.create",
          + "dataflow.jobs.get",
          + "dataflow.jobs.list",
          + "dataflow.messages.list",
          + "iam.serviceAccounts.getIamPolicy",
          + "secretmanager.versions.access",
          + "storage.objects.create",
          + "storage.objects.get",
          + "storage.objects.getIamPolicy",
          + "storage.objects.list",
          + "storage.objects.setIamPolicy",
          + "storage.objects.update",
        ]
      + project     = "gb-poc-373711"
      + role_id     = "sa.composer"
      + stage       = "GA"
      + title       = "SA Composer"
    }

  # google_project_iam_custom_role.custom_roles["dataflow"] will be created
  + resource "google_project_iam_custom_role" "custom_roles" {
      + deleted     = (known after apply)
      + description = "Least privilege role for SA dedicated to Dataflow"
      + id          = (known after apply)
      + name        = (known after apply)
      + permissions = [
          + "autoscaling.sites.readRecommendations",
          + "autoscaling.sites.writeMetrics",
          + "autoscaling.sites.writeState",
          + "bigquery.bireservations.get",
          + "bigquery.capacityCommitments.get",
          + "bigquery.capacityCommitments.list",
          + "bigquery.config.get",
          + "bigquery.datasets.create",
          + "bigquery.datasets.get",
          + "bigquery.datasets.getIamPolicy",
          + "bigquery.jobs.create",
          + "bigquery.jobs.list",
          + "bigquery.models.export",
          + "bigquery.models.getData",
          + "bigquery.models.getMetadata",
          + "bigquery.models.list",
          + "bigquery.readsessions.create",
          + "bigquery.readsessions.getData",
          + "bigquery.readsessions.update",
          + "bigquery.reservationAssignments.list",
          + "bigquery.reservationAssignments.search",
          + "bigquery.reservations.get",
          + "bigquery.reservations.list",
          + "bigquery.savedqueries.get",
          + "bigquery.savedqueries.list",
          + "bigquery.tables.createSnapshot",
          + "bigquery.tables.export",
          + "bigquery.tables.get",
          + "bigquery.tables.getData",
          + "bigquery.tables.list",
          + "bigquery.tables.updateData",
          + "bigquery.transfers.get",
          + "bigquerymigration.translation.translate",
          + "compute.instanceGroupManagers.update",
          + "compute.instances.delete",
          + "compute.instances.setDiskAutoDelete",
          + "dataflow.jobs.create",
          + "dataflow.jobs.get",
          + "dataflow.jobs.list",
          + "dataflow.messages.list",
          + "dataflow.shuffle.read",
          + "dataflow.shuffle.write",
          + "dataflow.streamingWorkItems.commitWork",
          + "dataflow.streamingWorkItems.getData",
          + "dataflow.streamingWorkItems.getWork",
          + "dataflow.workItems.lease",
          + "dataflow.workItems.sendMessage",
          + "dataflow.workItems.update",
          + "logging.logEntries.create",
          + "resourcemanager.projects.get",
          + "storage.buckets.get",
          + "storage.objects.create",
          + "storage.objects.delete",
          + "storage.objects.get",
          + "storage.objects.list",
          + "storage.objects.update",
        ]
      + project     = "gb-poc-373711"
      + role_id     = "sa.dataflow"
      + stage       = "GA"
      + title       = "SA Dataflow"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan.out

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan.out"

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_project_iam_member.sa_roles["sa-composer_projects/{{PROJECT_ID}}/roles/sa.composer"] will be created
  + resource "google_project_iam_member" "sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = "serviceAccount:sa-composer@gb-poc-373711.iam.gserviceaccount.com"
      + project = "gb-poc-373711"
      + role    = "projects/gb-poc-373711/roles/sa.composer"
    }

  # google_project_iam_member.sa_roles["sa-composer_roles/composer.ServiceAgentV2Ext"] will be created
  + resource "google_project_iam_member" "sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = "serviceAccount:sa-composer@gb-poc-373711.iam.gserviceaccount.com"
      + project = "gb-poc-373711"
      + role    = "roles/composer.ServiceAgentV2Ext"
    }

  # google_project_iam_member.sa_roles["sa-composer_roles/composer.serviceAgent"] will be created
  + resource "google_project_iam_member" "sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = "serviceAccount:sa-composer@gb-poc-373711.iam.gserviceaccount.com"
      + project = "gb-poc-373711"
      + role    = "roles/composer.serviceAgent"
    }

  # google_project_iam_member.sa_roles["sa-dataflow_projects/{{PROJECT_ID}}/roles/sa.dataflow"] will be created
  + resource "google_project_iam_member" "sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = "serviceAccount:sa-dataflow@gb-poc-373711.iam.gserviceaccount.com"
      + project = "gb-poc-373711"
      + role    = "projects/gb-poc-373711/roles/sa.dataflow"
    }

  # google_service_account.sa_list["sa-composer"] will be created
  + resource "google_service_account" "sa_list" {
      + account_id   = "sa-composer"
      + disabled     = false
      + display_name = "SA for Composer"
      + email        = (known after apply)
      + id           = (known after apply)
      + member       = (known after apply)
      + name         = (known after apply)
      + project      = "gb-poc-373711"
      + unique_id    = (known after apply)
    }

  # google_service_account.sa_list["sa-dataflow"] will be created
  + resource "google_service_account" "sa_list" {
      + account_id   = "sa-dataflow"
      + disabled     = false
      + display_name = "SA for Dataflow"
      + email        = (known after apply)
      + id           = (known after apply)
      + member       = (known after apply)
      + name         = (known after apply)
      + project      = "gb-poc-373711"
      + unique_id    = (known after apply)
    }

  # google_service_account_iam_member.admin_account_iam["sa-composer"] will be created
  + resource "google_service_account_iam_member" "admin_account_iam" {
      + etag               = (known after apply)
      + id                 = (known after apply)
      + member             = "user:mazlum.tosun@gmail.com"
      + role               = "roles/iam.serviceAccountAdmin"
      + service_account_id = "projects/gb-poc-373711/serviceAccounts/sa-composer@gb-poc-373711.iam.gserviceaccount.com"
    }

  # google_service_account_iam_member.admin_account_iam["sa-dataflow"] will be created
  + resource "google_service_account_iam_member" "admin_account_iam" {
      + etag               = (known after apply)
      + id                 = (known after apply)
      + member             = "user:mazlum.tosun@gmail.com"
      + role               = "roles/iam.serviceAccountAdmin"
      + service_account_id = "projects/gb-poc-373711/serviceAccounts/sa-dataflow@gb-poc-373711.iam.gserviceaccount.com"
    }

Plan: 8 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + services_account = {
      + sa-composer = {
          + account_id   = "sa-composer"
          + description  = null
          + disabled     = false
          + display_name = "SA for Composer"
          + email        = (known after apply)
          + id           = (known after apply)
          + member       = (known after apply)
          + name         = (known after apply)
          + project      = "gb-poc-373711"
          + timeouts     = null
          + unique_id    = (known after apply)
        }
      + sa-dataflow = {
          + account_id   = "sa-dataflow"
          + description  = null
          + disabled     = false
          + display_name = "SA for Dataflow"
          + email        = (known after apply)
          + id           = (known after apply)
          + member       = (known after apply)
          + name         = (known after apply)
          + project      = "gb-poc-373711"
          + timeouts     = null
          + unique_id    = (known after apply)
        }
    }

We then launch the apply trigger.

The Terraform remote state bucket contains the 2 created modules, each with a default.tfstate inside:

  • In the Google Cloud roles page, we can see the created Custom Roles:
  • The created Service Accounts:
  • In the IAM page, the role assignments to the service accounts:

At the end, we launch the destroy trigger to remove all the infrastructure created for this article.

Conclusion

This article showed how to build a modular infra with a separation of concerns in the Terraform code. This management is facilitated by Terragrunt, which executes plan/apply/destroy across multiple modules and avoids repeating code thanks to the DRY concept.

Cloud Build orchestrates everything with a serverless approach, synchronized with a GitHub repository. This CI/CD tool is lightweight and can easily manage a Terraform infrastructure with manual jobs. We prefer a manual approach in order to review the plan logs before launching the apply.

All the code shared in this article is accessible from my GitHub repository:

If you like my articles and want to see my posts, follow me on:

- Medium
- Twitter
- LinkedIn


GDE Cloud | Head of Data & Cloud GroupBees | Data | Serverless | IAC | Devops | FP