Unleashing Developer Potential with Google Cloud Workstations

Setting up a custom development environment with Cloud Workstations and Terraform, Part 2

Akhilesh Mishra
KPMG UK Engineering
5 min read · May 18, 2023


Photo by Jessy Smith on Unsplash

Having your development IDE in the cloud where your applications run makes much more sense for speed, security, and flexibility.

It enables faster developer onboarding, standardised workstations, high performance, and scalability, and it lets developers focus on innovation instead of troubleshooting issues on their laptops.

There is no greater investment than equipping your soldiers with the best tools and resources to win the battles they face.

I have already talked in detail about the advantages of having a remote development environment running in the cloud in my last blog: Why you should switch to remote development with Google Cloud Workstations?

In this blog, I will deploy a fully customized workstation for a team of developers using Google Cloud Workstations and Terraform.

Tools I will use:

  • Docker to build the custom image for the workstation.
  • Google Artifact Registry to store the image.
  • Terraform for IaC.
  • Code-OSS as the IDE for the workstation image.

Prerequisite: I assume you already have a VPC, subnet, and firewall rules created. You can follow my previous blog for that setup.
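
If you do not have that networking yet, here is a minimal sketch of the assumed setup (firewall rules omitted; the variable names match the ones used in the locals later on):

# Assumed prerequisite network - adjust the CIDR and names to your environment
resource "google_compute_network" "vpc" {
  project                 = var.project_id
  name                    = var.vpc_name
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  project                  = var.project_id
  name                     = var.subnetwork_name
  region                   = var.region
  network                  = google_compute_network.vpc.id
  ip_cidr_range            = "10.10.0.0/24" # placeholder range
  private_ip_google_access = true
}

With the networking in place, the first thing to do is enable the APIs needed for Cloud Workstations and Artifact Registry: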

resource "google_project_service" "api" {
for_each = toset([
"workstations.googleapis.com",
"artifactregistry.googleapis.com",
])
service = each.value
disable_on_destroy = false
}

We will create a service account that the workstation VMs will use to pull the container image from Artifact Registry.

# To pull image for workstation
resource "google_service_account" "image-pull" {
  project      = var.project_id
  account_id   = "image-pull"
  display_name = "Service Account - container image pull"
}

resource "google_project_iam_member" "workstation-image" {
  project = var.project_id
  role    = "roles/artifactregistry.reader"
  member  = "serviceAccount:${google_service_account.image-pull.email}"
}

Before we create a workstation, we need to create an Artifact Registry repository, build a custom image, and push the image to it.

artifact.tf

resource "google_artifact_registry_repository" "workstation-repo" {
project = var.project_id
location = var.region
repository_id = var.artifact_repo
format = "DOCKER"
}

You can build the container image in several ways:
- Build the image on your local machine and push it manually to Artifact Registry.
- Use the Docker provider for Terraform to build the image at apply time (see the sketch below).
- Build it in your CI workflow and push it to Artifact Registry.
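
For the Docker provider option, a minimal sketch using the kreuzwerker/docker provider could look like the one below. I am assuming a local ./workstation-image directory containing the Dockerfile, and that you have already run gcloud auth configure-docker for the registry host; names and paths are placeholders.

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

# Build the image locally from the Dockerfile at apply time
resource "docker_image" "workstation" {
  name = "${var.region}-docker.pkg.dev/${var.project_id}/${var.artifact_repo}/workstation-image:1.0"

  build {
    context = "${path.module}/workstation-image"
  }
}

# Push the built image to Artifact Registry
resource "docker_registry_image" "workstation" {
  name = docker_image.workstation.name
}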

For simplicity, I will build the custom image manually and push it to the repository before deploying the workstations.

You can run terraform init, terraform plan, and terraform apply from your local machine by authenticating as a service account with the project-level Owner role.

Note that this is not recommended for production environments; there, always deploy through a CI workflow and follow security best practices.

# Create the service account
gcloud iam service-accounts create terraform-deploy \
  --display-name="terraform-deploy-svc"

# Grant the Owner role at the project level
gcloud projects add-iam-policy-binding someproject \
  --member="serviceAccount:terraform-deploy@someproject.iam.gserviceaccount.com" \
  --role="roles/owner"

# Create a key for the service account
gcloud iam service-accounts keys create terraform-deploy.json \
  --iam-account=terraform-deploy@someproject.iam.gserviceaccount.com

# Authenticate gcloud as the service account
gcloud auth activate-service-account --key-file=terraform-deploy.json

# Point the Terraform Google provider at the same credentials
export GOOGLE_APPLICATION_CREDENTIALS=terraform-deploy.json

terraform init
terraform plan
terraform apply

Now that the boring part is done, let's get to the fun part: building the custom image and spinning up the workstations.

Dockerfile for the custom workstation image

We will use the predefined Code-OSS image as the base and customise it to our needs; in this case, we will configure it for Java development.

FROM us-central1-docker.pkg.dev/cloud-workstations-images/predefined/code-oss:latest

# Install essential packages
RUN apt-get update && apt-get install -y \
curl \
wget \
gnupg2 \
software-properties-common \
unzip

# Install Java 8 (note: openjdk-8 is not available in recent Debian releases,
# so this pulls the Temurin 8 JDK from the Adoptium apt repository instead)
RUN wget -qO - https://packages.adoptium.net/artifactory/api/gpg/key/public | gpg --dearmor -o /usr/share/keyrings/adoptium.gpg && \
    echo "deb [signed-by=/usr/share/keyrings/adoptium.gpg] https://packages.adoptium.net/artifactory/deb $(. /etc/os-release && echo $VERSION_CODENAME) main" > /etc/apt/sources.list.d/adoptium.list && \
    apt-get update && apt-get install -y temurin-8-jdk

# Install Git
RUN apt-get install -y git

# Install Node.js 14.17.4 and npm 6.14.14
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN npm install -g npm@6.14.14

# Install PostgreSQL (psql)
RUN apt-get install -y postgresql-client

# Install Kubernetes CLI (kubectl)
RUN curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl && \
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && \
rm kubectl

# Install Helm
RUN curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 && \
chmod 700 get_helm.sh && \
./get_helm.sh && \
rm get_helm.sh

# Startup configuration: any custom script placed under /etc/workstation-startup.d/
# will run when the workstation starts

COPY custom-setup.sh /etc/workstation-startup.d/
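
The custom-setup.sh referenced above is a hypothetical example of such a startup script; here it just exports JAVA_HOME for all users (the JVM path is an assumption based on the Temurin 8 package installed earlier):

#!/bin/bash
# custom-setup.sh - runs automatically at workstation startup because it is
# placed under /etc/workstation-startup.d/
echo 'export JAVA_HOME=/usr/lib/jvm/temurin-8-jdk-amd64' > /etc/profile.d/java-home.sh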

Let's build the image, tag it and push it.

# The image path will look like:
# ${var.region}-docker.pkg.dev/${var.project_id}/${var.artifact_repo}/IMAGE:TAG
# e.g. someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0

# Build the Docker image (run from the directory containing the Dockerfile)
docker build -t workstation-image:1.0 .

# Tag the image with the Artifact Registry path
docker tag workstation-image:1.0 someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0

# Authenticate Docker with the Artifact Registry host
gcloud auth configure-docker someregion-docker.pkg.dev

# Push the image
docker push someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0

Now that we have the custom image stored in Artifact Registry, we will use it to deploy the workstations.

Let's say you have 4 developers who will be using workstations. You will need 4 workstations, one assigned to each developer, and each developer will have access only to their own workstation. Let's use Terraform locals to store their names and emails and reference those while creating the workstations.

Let's deploy the workstations using the custom image. We will create:

  • the workstation cluster, workstation config, and workstations with IAM bindings

workstation.tf

locals {
  network_id = "projects/${var.project_id}/global/networks/${var.vpc_name}"
  subnet_id  = "projects/${var.project_id}/regions/${var.region}/subnetworks/${var.subnetwork_name}"

  developers_email = [
    "dev1@company.com",
    "dev2@company.com",
    "dev3@company.com",
    "dev4@company.com",
  ]

  developers_name = [
    "dev1",
    "dev2",
    "dev3",
    "dev4",
  ]
}

# Creating workstation cluster
resource "google_workstations_workstation_cluster" "default" {
  provider               = google-beta
  project                = var.project_id
  workstation_cluster_id = "workstation-terraform"
  network                = local.network_id
  subnetwork             = local.subnet_id
  location               = var.region
}

# Creating workstation config
resource "google_workstations_workstation_config" "default" {
  provider               = google-beta
  workstation_config_id  = "workstation-config"
  workstation_cluster_id = google_workstations_workstation_cluster.default.workstation_cluster_id
  location               = var.region
  project                = var.project_id

  host {
    gce_instance {
      machine_type                = "e2-standard-4"
      boot_disk_size_gb           = 50
      disable_public_ip_addresses = false
      service_account             = google_service_account.image-pull.email
    }
  }

  container {
    image       = "someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0"
    working_dir = "/home"
  }

  persistent_directories {
    mount_path = "/home"
    gce_pd {
      size_gb        = 200
      disk_type      = "pd-ssd"
      reclaim_policy = "DELETE"
    }
  }
}

# Workstation creation - one workstation per developer
resource "google_workstations_workstation" "default" {
  provider               = google-beta
  count                  = length(local.developers_email)
  workstation_id         = "workstation-${local.developers_name[count.index]}"
  workstation_config_id  = google_workstations_workstation_config.default.workstation_config_id
  workstation_cluster_id = google_workstations_workstation_cluster.default.workstation_cluster_id
  location               = var.region
  project                = var.project_id
}

# IAM binding (roles/workstations.user) so each developer can access only their own workstation
resource "google_workstations_workstation_iam_member" "member" {
  count                  = length(local.developers_email)
  provider               = google-beta
  project                = var.project_id
  location               = var.region
  workstation_cluster_id = google_workstations_workstation_cluster.default.workstation_cluster_id
  workstation_config_id  = google_workstations_workstation_config.default.workstation_config_id
  workstation_id         = google_workstations_workstation.default[count.index].workstation_id
  role                   = "roles/workstations.user"
  member                 = "user:${local.developers_email[count.index]}"
}

Voila, we are done.
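
Once Terraform has applied, each developer can start and open their own workstation from the Cloud Workstations page in the console, or with gcloud. The identifiers below match the Terraform resources above; depending on your gcloud version the command may still live under the beta component.

# Start the workstation assigned to dev1
gcloud workstations start workstation-dev1 \
  --cluster=workstation-terraform \
  --config=workstation-config \
  --region=someregion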

What can we do better?

  • If you disable public IP addresses, you must set up Private Google Access or Cloud NAT for the workstation subnet (see the sketch after this list).
  • To disable root privileges for anyone using the workstation, set the CLOUD_WORKSTATIONS_CONFIG_DISABLE_SUDO environment variable to true.
  • Use the Shielded VM and Confidential VM options.
  • If you have compliance requirements, use customer-managed encryption keys (CMEK).
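
For example, the Cloud NAT piece and the sudo-disable variable could be wired in roughly like this (a sketch; the router and NAT names are placeholders, and the env map goes inside the container block of the workstation config shown earlier):

# Cloud NAT so workstations without public IPs can still reach the internet
resource "google_compute_router" "workstation" {
  project = var.project_id
  name    = "workstation-router"
  region  = var.region
  network = local.network_id
}

resource "google_compute_router_nat" "workstation" {
  project                            = var.project_id
  name                               = "workstation-nat"
  router                             = google_compute_router.workstation.name
  region                             = var.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

# Inside google_workstations_workstation_config.default:
# container {
#   image = "someregion-docker.pkg.dev/someproject/somerepo/workstation-image:1.0"
#   env = {
#     CLOUD_WORKSTATIONS_CONFIG_DISABLE_SUDO = "true"
#   }
# }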

Read the official docs for more on workstations.

Disclaimer: This post was originally published on my blog, livingdevops.com.

Thanks for reading. Follow me for more content on Google Cloud, Terraform, and Python.

Find me on LinkedIn: Akhilesh Mishra
