My Cloud Workstation productivity setup

Daniel Strebel
Google Cloud - Community
8 min read · Jul 31, 2023

I have been tinkering with Cloud Workstations in Google Cloud for a while and have come to embrace them for many of my coding and infrastructure engineering projects. Along the way I have applied a number of customizations that could be interesting for other members of the community who are looking at moving their primary developer environments into the cloud.

If you are more interested in a broader organizational perspective, there are a number of resources that talk about the operational aspects of Cloud Workstations and how large companies (e.g. L’Oreal) are leveraging them to be more productive.

In this post I want to turn the focus to the developers and show how easy it is to tune a Cloud Workstation configuration for specific tooling needs. As a teaser, this is what my customized Cloud Workstation container looks like:

Cloud Workstation with the customizations described in this post

I use this workstation container image as a basis for live coding, creating demo assets and working on my open source projects. The configuration of the base image is very intentional, as I tend to customize it further depending on the programming language of a given project. However, there are some common configurations and tools that I have come to rely on, and I decided to embed them in the workstation image so that I don’t have to re-apply the same configuration for every new workstation image I create.

The remainder of this post unpacks some of the details of how to customize a Cloud Workstation and provides a config sample that you can directly apply in your own project.

It all starts with a container image

Creating a Cloud Workstation is fairly straightforward. First you create a workstation cluster that defines the GCP region and VPC network where the workstations should be located. The templates for the workstations are created in a workstation configuration. The configuration is associated with a workstation cluster and specifies the workstation virtual machine type as well as the container image that should be loaded onto the workstations. From a developer experience perspective, another important setting in the workstation configuration is the number of machines in the quickstart pool, which pre-warms a set of workstations to significantly reduce the workstation startup time.

The easiest way to specify the workstation cluster, the configuration and finally the workstations is via the Google Cloud Console UI:

Creating a Cloud Workstation configuration on the Google Cloud console.

For a more automated, auditable and repeatable configuration I definitely recommend using the Terraform resources for the cluster, the configuration, and the workstation itself.
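If you prefer working on the command line, the same cluster, configuration, and workstation hierarchy can also be sketched with gcloud. The following is a minimal, illustrative sketch only; the resource names are placeholders and flags such as --pool-size for the quickstart pool should be verified against the gcloud workstations reference:

# Workstation cluster tied to a region and VPC network (full resource paths assumed)
gcloud workstations clusters create my-cluster \
  --region=europe-west1 \
  --network=projects/$PROJECT_ID/global/networks/default \
  --subnetwork=projects/$PROJECT_ID/regions/europe-west1/subnetworks/default \
  --project=$PROJECT_ID

# Workstation configuration: machine type and quickstart pool size
gcloud workstations configs create my-config \
  --cluster=my-cluster --region=europe-west1 \
  --machine-type=e2-standard-4 \
  --pool-size=1 \
  --project=$PROJECT_ID

# Individual workstation created from the configuration
gcloud workstations create my-workstation \
  --cluster=my-cluster --config=my-config --region=europe-west1 \
  --project=$PROJECT_ID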

Regardless of the way the workstation is provisioned, the actual workstation logic is provided in the form of a container image. Cloud Workstation configurations allow you to pick from a number of first- and third-party base images, or to build your own from scratch. The default workstation image is based on Visual Studio Code Open Source (Code-OSS, for short) with a number of customizations that make it a suitable starting point for many development tasks and specifically target developers in a Google Cloud environment, such as the pre-installed Cloud Code extension.

Since the workstation image is available in a public image registry you can pull it and run it locally like you would run any other image.

Disclaimer: Running the workstation image outside of Cloud Workstation is not a supported use case. It also doesn’t have the full feature set of a Cloud Workstation. We only mention it in the context of this post as an interesting case study and to take a look under the hood. If you landed here to learn how to build workstation images to run in your production environment, you can safely skip to the next section.

docker run --rm -it -p 2022:22 -p 8080:80 \
europe-west1-docker.pkg.dev/cloud-workstations-images/predefined/code-oss:latest

This will allow you to open the web view of the Code-OSS container in your local web browser at http://localhost:8080 (port 8080 is mapped to the container's port 80 in the command above).

Running the Code-OSS container locally

Alternatively, you can also use the workstation image as a remote SSH target for your local VS Code editor. To do this you will have to set up an SSH connection to the exposed SSH port of the workstation container via the Remote-SSH extension.

Configuring an SSH connection to the locally running Code OSS container
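With the port mapping from the docker run command above, the host entry for the Remote-SSH extension boils down to something like the following sketch. The user name user matches the default user that the workstation image sets up (see the startup script later in this post); depending on how authentication is configured in the container you may still need to provide a key or password:

# SSH target for the locally running container (sketch; adjust to your setup)
ssh user@localhost -p 2022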

Again, none of these modes should be assumed to work reliably; neither are they recommended for any production setups. However, running the workstation image directly offers a quick turnaround time and can come in very handy if you plan to customize your own workstation images and need to validate your changes along the way.

Customizing the Cloud Workstation Image

The default workstation image offers a simple and cheerful way to get started. It also offers developers a range of tools that they are likely already familiar with if they have used Google Cloud Shell before. The image is based on Debian and comes with the gcloud CLI, kubectl and other handy utilities like jq pre-installed.

As Cloud Workstations are mounted with persistent volumes, developers can easily take a vanilla workstation image and start customizing it with their own tooling and personal preferences.

In some scenarios, however, having every developer customize their own workstation is less practical, especially when:

  • There are teams that share a set of custom tooling that every developer is expected to have in place, e.g. a common linter configuration.
  • Developers are using different configurations for different tasks, e.g. one for frontend and one for infrastructure development.
  • Developers are using multiple workstations located within specific VPC networks so that they have access to internal resources.

In these scenarios having the flexibility to customize the workstation image is a very important feature. The workstation documentation provides a great starting point for customizing these images.

Configuration at the image build vs container startup

Essentially, there are two main extension points for the base Code-OSS image that we want to explore:

  1. Customization that can be provided at image build time.
  2. Customization that needs to be provided during container startup.

Extending the workstation configuration through the Dockerfile (or Containerfile) itself is obviously preferred, as it only adds to the image build time and not to the startup time of the container.

By extending the Dockerfile you can for example:

  • Install libraries through the package manager
  • Pre-Populate repositories
  • Configure Extensions that should be pre-installed in VS Code
  • Install other tools and binaries

However, one caveat is that the user home directory isn’t available until the workstation’s startup scripts are run. For this reason, any configuration that touches the user’s home directory has to be applied at startup with a script in the /etc/workstation-startup.d directory.

In our case, this means that a startup script is needed to:

  • Set the settings.json file that defines the machine level VS Code default settings.
  • Set any shell profiles like aliases or other customization.

Example Cloud Workstation Image Build

This section provides the configuration used in the workstation container shown at the beginning of this blog post. For this we create a new folder and place two files in it:

A Dockerfile to add

  • Terraform to manage various kinds of infrastructure
  • ZSH as an alternative shell, with oh-my-zsh to simplify its configuration and the powerlevel10k theme
  • k9s for navigating Kubernetes clusters without remembering all the kubectl syntax
  • IDE extensions from the Open VSX Registry
  • a runtime config script as described below

with the following content:

FROM europe-west1-docker.pkg.dev/cloud-workstations-images/predefined/code-oss:latest

RUN wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg && \
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
RUN sudo apt update && sudo apt install -y zsh gnupg software-properties-common terraform
RUN apt-get clean

# Install oh-my-zsh, plugins and the powerlevel10k theme
ENV ZSH=/opt/workstation/oh-my-zsh
RUN sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" "" --unattended && \
git clone https://github.com/zsh-users/zsh-autosuggestions /opt/workstation/oh-my-zsh/plugins/zsh-autosuggestions && \
git clone --depth=1 https://github.com/romkatv/powerlevel10k.git /opt/workstation/oh-my-zsh/custom/themes/powerlevel10k

# Install k9s
RUN curl -s https://api.github.com/repos/derailed/k9s/releases/latest \
| grep "browser_download_url.*Linux_amd64.tar.gz" \
| cut -d : -f 2,3 \
| tr -d \" \
| wget -qi - && mkdir -p /opt/workstation/bin && tar -xf k9s_Linux_amd64.tar.gz -C /opt/workstation/bin

# Install extensions
RUN wget -O terraform.vsix $(curl -q https://open-vsx.org/api/hashicorp/terraform/linux-x64 | jq -r '.files.download') \
&& unzip terraform.vsix "extension/*" \
&& mv extension /opt/code-oss/extensions/terraform

RUN wget -O vscode-icons.vsix $(curl -q https://open-vsx.org/api/vscode-icons-team/vscode-icons | jq -r '.files.download') \
&& unzip vscode-icons.vsix "extension/*" \
&& mv extension /opt/code-oss/extensions/vscode-icons

# Copy workstation customization script
COPY workstation-customization.sh /etc/workstation-startup.d/300_workstation-customization.sh
RUN chmod +x /etc/workstation-startup.d/300_workstation-customization.sh

and a startup script called workstation-customization.sh to

  • Set the VS Code machine level configuration (can be overridden at the user or workspace level)
  • Initialize the ZSH configuration
  • Set some handy aliases

with the content of:

#!/bin/bash

CODEOSS_PATH="/home/user/.codeoss-cloudworkstations"
SETTINGS_PATH="$CODEOSS_PATH/data/Machine"

mkdir -p $SETTINGS_PATH
cat << EOF > $SETTINGS_PATH/settings.json
{
"workbench.colorTheme": "Default Dark+",
"terminal.integrated.defaultProfile.linux": "zsh"
}
EOF

chown -R user:user $CODEOSS_PATH
chmod -R 755 $CODEOSS_PATH

export ZSH=/opt/workstation/oh-my-zsh

if [ -f "/home/user/.zshrc" ]; then
echo "ZSH already configured"
else

cat << 'EOF' > /home/user/.zshrc
export PATH="$PATH:/opt/workstation/bin"

export ZSH=/opt/workstation/oh-my-zsh
export ZSH_THEME="powerlevel10k/powerlevel10k"
export POWERLEVEL9K_DISABLE_CONFIGURATION_WIZARD=True

plugins=(
git
zsh-autosuggestions
kubectl
)

alias tf='terraform'
alias kc='kubectl'
alias code='code-oss-cloud-workstations'

source "$ZSH/oh-my-zsh.sh"
EOF
chsh -s $(which zsh) user
fi

zsh -c "source $ZSH/oh-my-zsh.sh"

chown -R user:user /home/user
chown -R user:user /opt/workstation
chmod -R 755 /opt/workstation

Feel free to adjust your extensions, tools and other configuration as you see fit. Once you are happy with the configuration you can run the following commands, which execute a Cloud Build job and push the image to an Artifact Registry repository. Going forward you probably want to establish a more robust publishing process that involves source code management, peer reviews of the changes, vulnerability scanning and automated build triggers.

PROJECT_ID=<my project id>
gcloud services enable artifactregistry.googleapis.com cloudbuild.googleapis.com --project $PROJECT_ID

gcloud artifacts repositories create default --repository-format=docker \
--location=europe-west1 --project $PROJECT_ID

gcloud builds submit . \
--tag=europe-west1-docker.pkg.dev/$PROJECT_ID/default/my-workstation \
--project $PROJECT_ID

Once the build completes you can reference the image that was pushed to your Artifact Registry repository in the Cloud Workstation configuration and start using it for any development tasks going forward:

Using the custom workstation image in a Cloud Workstation configuration
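If you manage the workstation configuration with gcloud instead of the Console, the custom image can be referenced roughly like this (a sketch only; the cluster, configuration and region names are placeholders carried over from the earlier example):

gcloud workstations configs create my-custom-config \
  --cluster=my-cluster --region=europe-west1 \
  --machine-type=e2-standard-4 \
  --container-custom-image=europe-west1-docker.pkg.dev/$PROJECT_ID/default/my-workstation \
  --project=$PROJECT_ID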

Day Two Operations

It is recommended that you run scheduled periodic rebuilds of your customized Cloud Workstation image so that the latest version of the upstream Code-OSS image, security patches, and new features are automatically included in your image. Cloud Build triggers and Cloud Scheduler provide an easy way to automate, for example, weekly rebuilds, as described in more detail in the Cloud Build documentation.
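As a rough sketch of such a setup (the trigger name, repository, schedule and service account below are placeholders, and the exact flags should be checked against the Cloud Build scheduling documentation), a manual build trigger can be invoked on a schedule from Cloud Scheduler:

# Manual trigger that builds the image from the repository holding the Dockerfile
gcloud builds triggers create manual --name=rebuild-workstation-image \
  --repo=https://github.com/my-org/my-workstation-image --repo-type=GITHUB \
  --branch=main --build-config=cloudbuild.yaml --project=$PROJECT_ID

# Weekly Cloud Scheduler job that invokes the trigger
gcloud scheduler jobs create http rebuild-workstation-image-weekly \
  --schedule="0 3 * * 1" --location=europe-west1 \
  --uri="https://cloudbuild.googleapis.com/v1/projects/$PROJECT_ID/triggers/rebuild-workstation-image:run" \
  --http-method=POST --message-body='{"branchName": "main"}' \
  --oauth-service-account-email=scheduler@$PROJECT_ID.iam.gserviceaccount.com \
  --project=$PROJECT_ID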

For additional security it is also recommended to enable Artifact Analysis on the Artifact Registry repository, which will allow you to learn about potential CVEs as they come up in the future.
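Automatic vulnerability scanning of images pushed to Artifact Registry is switched on by enabling the Container Scanning API in the project:

gcloud services enable containerscanning.googleapis.com --project $PROJECT_ID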

Further Information and Links
