Working with Terraform + GCP (Part 2)

Etido Ema
4 min read · Jan 15, 2024


Welcome back to my page. Before we start, here is the link to Part 1 of this write-up: medium.com

Please have a look at Part 1 so you can fully follow what I am doing here.

Step 1: Run terraform apply to create the bucket and enter yes when prompted. After the run finishes, you will notice a change between the terraform.tfstate.backup file and the terraform.tfstate file. Head over to your Cloud Storage buckets and refresh; your bucket will have been created.
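For a quick sanity check, the apply run looks roughly like this (output trimmed; the resource and bucket names depend on what you defined in Part 1):

terraform apply
# Plan: 1 to add, 0 to change, 0 to destroy.
# Do you want to perform these actions?
#   Enter a value: yes
# google_storage_bucket.demo-bucket: Creating...
# Apply complete! Resources: 1 added, 0 changed, 0 destroyed.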

Step 2: Now let's search for terraform + BigQuery + dataset. For this demonstration, let's leave out the optional arguments and copy only the required one; everything is optional apart from dataset_id, which is required. Then navigate to the terminal or Git Bash and type terraform fmt. You can edit the resource name and the dataset_id. CTRL + S, then terraform apply. Navigate to BigQuery Studio and check whether you have a dataset; you probably should not at this point. Navigate back to your terminal and input yes.

Let's work with this.

The code we copied:


resource "google_bigquery_dataset" "dataset" {
dataset_id = "example_dataset"
}

Edited:

resource "google_bigquery_dataset" "ny_taxi_dataset" {
dataset_id = "ny_taxi_dataset"
}
  • CTRL + S, then navigate to the command line and run terraform fmt. This formats your Terraform code. Then run terraform apply and input yes. This will create the dataset in BigQuery Studio.
  • Navigate to your tfstate file; you will notice that the resources have been added.

Our ny_taxi_dataset has been created.
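If you prefer the terminal over the console, Terraform can also confirm what it is now tracking; a minimal check, assuming the resource names used above:

terraform state list
# google_bigquery_dataset.ny_taxi_dataset
# google_storage_bucket.demo-bucket

terraform state show google_bigquery_dataset.ny_taxi_dataset
# dataset_id = "ny_taxi_dataset"
# (remaining attributes omitted)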

Step 3: Now let's create a variable.tf file, but first let's destroy the resources we created on GCP. Navigate to the command line and type terraform destroy.
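The destroy run prompts for confirmation just like apply; roughly (output trimmed):

terraform destroy
# Plan: 0 to add, 0 to change, 2 to destroy.
# Do you really want to destroy all resources?
#   Enter a value: yes
# Destroy complete! Resources: 2 destroyed.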

  • We will be declaring variables in this file.

variable.tf

# Variable.tf file

variable "project" {
  description = "Project"
  default     = "ultimate-manutd-929392"
}

variable "region" {
  description = "Region"
  default     = "us-central1"
}

variable "location" {
  description = "Project Location"
  default     = "US"
}

variable "bq_dataset_name" {
  description = "My BigQuery Dataset Name"
  default     = "ny_taxi_dataset"
}

variable "gcs_bucket_name" {
  description = "My Storage Bucket Name"
  default     = "ultimate-aspect-410714-terra-bucket"
}

variable "google_storage_class" {
  description = "Bucket Storage Class"
  default     = "STANDARD"
}
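Because every variable above has a default, nothing needs to be passed in, but the defaults can be overridden at plan/apply time if you want to experiment; the dataset name below is just a hypothetical example:

terraform plan -var="bq_dataset_name=demo_dataset"

# or via an environment variable that Terraform picks up automatically:
export TF_VAR_bq_dataset_name=demo_dataset
terraform plan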

Main.tf

# Main.tf file

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.51.0"
    }
  }
}

provider "google" {
  project = var.project
  region  = var.region
}

resource "google_storage_bucket" "demo-bucket" {
  name          = var.gcs_bucket_name
  location      = var.location
  force_destroy = true

  lifecycle_rule {
    condition {
      age = 1
    }
    action {
      type = "AbortIncompleteMultipartUpload"
    }
  }
}

resource "google_bigquery_dataset" "ny_taxi_dataset" {
  dataset_id = var.bq_dataset_name
  location   = var.location
}
  • After all the editing and adding of variables in the main.tf and variable.tf files, let's head over to our command line and run: terraform fmt → terraform plan
  • Let's refresh BigQuery. Since we destroyed the dataset, it won't exist there. Now let's run terraform apply on the command line, navigate back to BigQuery Studio, and refresh; the resources will become available. You can destroy them once you are done.
  • Let's head over to our Terraform files to unset the credentials, because we intend to put those credentials in the Terraform files themselves.
  • Navigate to your command line and type in the commands in this order: echo $GOOGLE_CREDENTIALS → unset GOOGLE_CREDENTIALS → echo $GOOGLE_CREDENTIALS → terraform plan (see the sketch after this list)
  • At this point you will get an error, because the credentials are no longer available in our environment. Now let's put those credentials into our Terraform files instead.
  • Now edit your variable.tf and main.tf files.
  • CTRL + S, then head over to the terminal and run terraform plan → terraform apply.
  • This will now run fine.
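Roughly, that terminal session looks like this (the error text is abbreviated and varies a little by provider version):

echo $GOOGLE_CREDENTIALS
# (prints the key path you exported in Part 1)
unset GOOGLE_CREDENTIALS
echo $GOOGLE_CREDENTIALS
# (prints an empty line)
terraform plan
# Error: ... could not find default credentials ...

# after adding credentials = file(var.credentials) to the provider block:
terraform plan
terraform apply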

Variable.tf

# Variable.tf file

# google credentials
variable "credentials" {
  description = "My Credentials"
  default     = "./keys/my-creds.json"
}

variable "project" {
  description = "Project"
  default     = "ultimate-manutd-929392"
}

variable "region" {
  description = "Region"
  default     = "us-central1"
}

variable "location" {
  description = "Project Location"
  default     = "US"
}

variable "bq_dataset_name" {
  description = "My BigQuery Dataset Name"
  default     = "ny_taxi_dataset"
}

variable "gcs_bucket_name" {
  description = "My Storage Bucket Name"
  default     = "ultimate-aspect-410714-terra-bucket"
}

variable "google_storage_class" {
  description = "Bucket Storage Class"
  default     = "STANDARD"
}

Main.tf

# Main.tf file

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.51.0"
    }
  }
}

provider "google" {
  credentials = file(var.credentials)
  project     = var.project
  region      = var.region
}

resource "google_storage_bucket" "demo-bucket" {
  name          = var.gcs_bucket_name
  location      = var.location
  force_destroy = true

  lifecycle_rule {
    condition {
      age = 1
    }
    action {
      type = "AbortIncompleteMultipartUpload"
    }
  }
}

resource "google_bigquery_dataset" "ny_taxi_dataset" {
  dataset_id = var.bq_dataset_name
  location   = var.location
}
  • You can now confirm that our bucket has been created in Cloud Storage and our dataset in BigQuery (see the sketch after this list).
  • Once you are done, you can go ahead and destroy everything with terraform destroy.
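As a quick check outside of Terraform, the Cloud SDK can list both resources (assuming gsutil and bq are authenticated against the same project; output format is approximate):

gsutil ls
# gs://ultimate-aspect-410714-terra-bucket/

bq ls
#     datasetId
#  -----------------
#   ny_taxi_dataset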

If you have been with me up to this point, I believe you can now see the power and flexibility that Terraform gives us.

I will be super happy if you can click on the follow button.

