How Google Cloud Workstations saved my demo… (and my bacon)

Stuart Barnett
Appsbroker CTS Google Cloud Tech Blog
Jul 17, 2023 · 10 min read

Google’s cloud-based developer environments are a winner for road-warrior devs — as well as those who employ them.

Sometimes, the unexpected happens…

For various reasons, I’ve found myself working away from my office quite a bit recently. For most things I need to do that’s fine — I have a small MacBook Pro provided by my employers (i.e. Google Cloud Premier Partner CTS); it’s light, has great battery life, a decent (if small) screen, and does most things I need brilliantly. At least, until it comes to local Kubernetes (k8s) app development. You see, firing up Rancher (for Docker), then minikube (my preferred local k8s distro), then VS Code (with Cloud Code plug-ins), building some Maven-based Spring Boot Java apps to package up, containerise and run as pods… that pretty much maxes out my machine — but with power in, an external monitor and some decent wi-fi, it works nicely enough for what I need.

Trying all of that on the move though… well, let’s just say the fan is working overtime. The heat generated means I have to be careful how I sit, and the battery life impact has led to some thrilling “will it/won’t it” moments on some calls with clients. Oh — and good luck pulling down/pushing up all of those Maven artifacts and Docker base images on spotty wi-fi. Still — it’s just about do-able — though without my reasonably specc’d MacBook, it would be nigh-on impossible.

Photo by Efe Kurnaz on Unsplash

I realised this when I found myself at my Mum’s for a few days, some 200 miles away from said device, which I had forgotten to take with me. Given I had planned to be working on a client demo for later that week (subject: “Using Skaffold and Cloud Code for Accelerating the Inner Dev Loop”), this was going to be something of a problem. A big problem. My mum does have a laptop — but it’s Windows-based, and specc’d more for light website browsing than cutting-edge k8s dev. She’s also not fortunate enough to have superfast broadband — so downloading tools was going to take some time, not to mention all the various base layers and containers being pulled and pushed. Oh dear… At least the base code was all committed and up-to-date in our GitHub (old dev habits die hard). I briefly toyed with the idea of trying to install all the tools on my mum’s machine, but quickly realised that it would be futile, and I was facing an early return home. Aside from getting to spend less time with her, it also meant missing out on being spoiled by her cooking, including a much-awaited cooked breakfast ☹️. And then, the Eureka moment…

I’d had a quick play earlier in the year with Cloud Workstations, Google’s container-based managed dev env offering. This enables you to spin up predefined images containing a bunch of developer tools on clusters of VMs on a pay-as-you-go basis, with some clever jiggery-pokery which means you can access any installed IDE and terminals directly through a browser. They have an image that comes pre-configured with Code-OSS (i.e. open source VS Code), Cloud Code and minikube… and given I could access my demo project in Google Cloud through the browser, this could be worth a shot?

Station to Station

First things first — as with all Google Cloud services, I needed to enable the Cloud Workstations API, as well as granting myself the Workstations Admin IAM role.
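For anyone following along at home, those two setup steps look roughly like this from the gcloud CLI — the project ID and email here are placeholders, not my real ones:

```shell
# Enable the Cloud Workstations API in the target project (placeholder project ID)
gcloud services enable workstations.googleapis.com --project=my-demo-project

# Grant yourself the Workstations Admin role (placeholder member)
gcloud projects add-iam-policy-binding my-demo-project \
  --member="user:me@example.com" \
  --role="roles/workstations.admin"
```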

That done, I needed a workstation configuration — this effectively defines what will be available on your workstation: processor/memory, the size of the attached disk, IAM (i.e. who can create/use workstations), which region it will run in, etc. I just picked a suitable spec and went with the suggested sane defaults. It also prompted me to create a workstation cluster (note this isn’t a k8s cluster — rather, it’s a shared control plane for your workstations/VMs in a particular VPC and region).

Creating a Workstation Configuration
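I did all of this through the console, but as a sketch, the equivalent gcloud commands look something like the below — the names, region and machine spec are my assumptions, not the console defaults:

```shell
# The workstation cluster: a shared control plane for your workstations
# in a given VPC/region (not a k8s cluster)
gcloud workstations clusters create demo-cluster \
  --region=europe-west1

# The configuration: machine spec and disk size for workstations built from it
gcloud workstations configs create demo-config \
  --cluster=demo-cluster \
  --region=europe-west1 \
  --machine-type=e2-standard-4 \
  --pd-disk-size=200
```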

As a last step I needed to add myself as a user able to create a workstation from this configuration — then on hitting create, I saw this:

My Workstation configurations
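That add-a-user step also has a CLI equivalent — something along these lines (names are placeholders again):

```shell
# Allow a user to create/use workstations from this configuration
gcloud workstations configs add-iam-policy-binding demo-config \
  --cluster=demo-cluster \
  --region=europe-west1 \
  --member="user:me@example.com" \
  --role="roles/workstations.user"
```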

So based on this configuration, I could create a workstation:

Creating a Workstation

Which then appeared in my list of available workstations (as a stopped workstation):

Workstations ready to go

Then I just hit the “Launch” button, et voilà, a new tab opens in my browser — with a shiny new Code-OSS IDE!

A vanilla Cloud Workstation
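If you prefer the terminal to the console, the create/start/launch steps can be scripted too — a sketch with the same placeholder names as before:

```shell
# Create a workstation from the configuration (it starts life stopped)
gcloud workstations create my-workstation \
  --cluster=demo-cluster \
  --config=demo-config \
  --region=europe-west1

# Start it up
gcloud workstations start my-workstation \
  --cluster=demo-cluster \
  --config=demo-config \
  --region=europe-west1

# Show its details, including the https host you can open in a browser
gcloud workstations describe my-workstation \
  --cluster=demo-cluster \
  --config=demo-config \
  --region=europe-west1
```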

Tools you can trust

Ok, so far so good — as I use VS Code and Cloud Code on my local setup this all felt nicely familiar — but could I still run k8s locally on this workstation? Yep, minikube all present and correct — though I needed an additional auth step to get the gcp-auth plugin up and running for accessing any Google services. It was also really simple to authenticate against the GCP project I was targeting — so all my services were visible via the Cloud Code integrations.
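For the curious, that extra auth step was along these lines — get Application Default Credentials onto the box, then let minikube’s gcp-auth addon hand them to your pods (a sketch, not a verbatim transcript):

```shell
# Start the local k8s cluster inside the workstation
minikube start

# Application Default Credentials for the target GCP project
gcloud auth application-default login

# The gcp-auth addon injects those credentials into pods,
# so workloads can call Google services
minikube addons enable gcp-auth
```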

A quick scoot on the command line in a terminal showed the following all available:

gcloud CLI, Docker, git, kubectl, helm, skaffold, kpt…

…along with some of the usual language dev runtimes (Java, Go, Python). As I had admin on this image, I could install pretty much whatever I wanted — although all of what I needed could be handled by installable VS Code extensions (again, readily available). You can build and maintain your own custom images if you need bespoke tooling, but this was more than good enough for my needs.
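As a flavour of what a bespoke image might look like — this is a hypothetical example layered on top of Google’s predefined Code-OSS base (the base image path, registry names and tooling choices are assumptions for illustration):

```shell
# Hypothetical custom workstation image: extend the predefined Code-OSS base
cat > Dockerfile <<'EOF'
FROM us-central1-docker.pkg.dev/cloud-workstations-images/predefined/code-oss:latest
# Add whatever bespoke tooling your teams need, e.g. Maven
RUN apt-get update && apt-get install -y --no-install-recommends maven \
    && rm -rf /var/lib/apt/lists/*
EOF

# Build and push to your own Artifact Registry (placeholder repo)
docker build -t europe-docker.pkg.dev/my-demo-project/workstation-images/java-dev:v1 .
docker push europe-docker.pkg.dev/my-demo-project/workstation-images/java-dev:v1
```

You’d then point a workstation configuration at that custom image instead of the predefined one.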

And so off I went; merrily hacking away at Java code and deployment manifests, building and deploying and debugging via Skaffold to minikube and GKE, triggering builds, editing pipeline definitions — all via this tidy little VM. As I was inside Google’s network, pulling container layers was super fast, rather than relying on my local connection. I was productive!

Thus the demo was built, the code committed — all from my mum’s laptop browser, allowing me to enjoy a couple more days with her (and eat her out of house and home). Before I left I powered the Workstation down, got back to my office, fired it up again and just carried on exactly as I had before. To be honest, it was just as simple to continue to do everything through the browser as it would have been to pull the code into my local dev env — I had everything I needed. I actually used this environment to deliver the demo itself — ok, I pulled the code locally as a backup — but never felt I needed it. I was sold.
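The inner loop itself boils down to a command or two once a project has a skaffold.yaml — roughly like this (cluster and registry names are placeholders):

```shell
# Inner loop: watch source, rebuild images, redeploy to the current
# kube context (minikube here) and stream logs
skaffold dev

# Or target GKE: switch context and give skaffold a registry to push to
kubectl config use-context gke_my-demo-project_europe-west1_demo-gke
skaffold run --default-repo=europe-docker.pkg.dev/my-demo-project/apps
```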

The Road Test

I used the same workstation the following week for a similar demo for a major Financial Services Institution (FSI) — fittingly, FS is a sector in which orgs often employ large teams of contingent workers. In my previous experience, I’ve seen offshore colleagues struggling with poorly specc’d machines, unable to access software tools because of policy restrictions, and unable to build/test locally — these teams could potentially unlock huge gains in velocity if they had access to this tooling, with their employers safe in the knowledge that they can have complete control over these environments. I know these days some orgs provision managed VMs for dev, typically accessed by something like Citrix — but the beauty of this solution is that everything can remain within the perimeter of your Google Cloud project, using the same security model. You can use the same IAM, perimeter controls (like VPC-SC) and zero trust approach to protect everything in the same way you control access to your other cloud assets — and all you need to access it securely is a Chrome browser.

Google Dogfood strikes again

What I hadn’t realised until having a chat with someone at Google Towers is that this is very similar to how a lot of engineering is undertaken at Google. Code doesn’t live on an engineer’s laptop but is only ever edited, built and executed on internal environments and a custom IDE (Cider), access to which is controlled through a browser via BeyondCorp (Google’s internal zero trust solution). So it’s a very similar, secure workflow, supporting a very large, globally distributed engineering team.

Of course, Google aren’t exactly the only (or indeed anywhere near the first) kids on the block to be offering this as a service to others — think Gitpod, GitHub Codespaces, AWS Cloud9, Microsoft Dev Box — but there are a few crucial differences:

  • Open source IDEs (or IntelliJ, if you prefer) and customizable base images
  • Tight integration with a local k8s instance, native development utilities and Google Cloud services and APIs with Cloud Code
  • The ability to integrate directly with BeyondCorp Enterprise, Google’s zero trust solution, and cloud native IAM
  • It is an integral part of a wider Secure Software Supply Chain offering (see Software Delivery Shield)
Cloud Workstations (left) as a component of Software Delivery Shield — credit: https://cloud.google.com/software-supply-chain-security/docs/sds/overview

The control, integration and co-location of your environment within your cloud estate make for a much more tantalising prospect: a cost-effective, controlled corporate development environment that your developers might actually want to use (and that you might actually feel comfortable with them using!). The fact you can customise the images also means you can provide bespoke tooling for folks like data engineers, data analysts, SREs etc — not just for app modernisation types like me, although that’s where the standard offering really scores IMHO.

The price of freedom

OK, there’s always a downside — and for me this is probably it (disclaimer: I did my demo work before GA, with associated pre-GA pricing). Although the PAYG element is probably quite reasonable as a function of machine size and hours used, there’s also the overhead of a cluster fee (your workstations run in a managed “cluster” of machines) — and this is currently $144 a month per cluster ☹️ — that’s fine as an economy of scale if you’re running a large fleet of workstations for a team of developers, but it’s a bit pricey if you only want a couple of machines or so. So as with a few things Google, there’s a $$s barrier for smaller orgs. Granted, all that has to be weighed against the capital cost of supplying workstation-spec machines to your developers, depreciation, warranties etc — but it’s still steep for smaller teams. That said, we’ve seen this type of pricing scheme evolve into more flexible options over time, so let’s hope that’s the case here too — as I feel there are any number of orgs large and small who could benefit from this approach when developing for Google Cloud.
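To make the economy-of-scale point concrete, here’s a back-of-envelope sketch. The flat cluster fee is the real figure quoted above ($144/month), but the per-hour VM rate and usage hours are my assumptions, not official pricing:

```shell
# Assumed figures (only the cluster fee is from the article)
CLUSTER_FEE=144          # USD/month, flat per cluster
VM_CENTS_PER_HOUR=20     # ~$0.20/hour per workstation VM (assumption)
HOURS_PER_DEV=160        # active hours per dev per month (assumption)
DEVS=5

# Monthly VM usage cost across the team, in whole dollars
VM_COST=$(( VM_CENTS_PER_HOUR * HOURS_PER_DEV * DEVS / 100 ))
TOTAL=$(( CLUSTER_FEE + VM_COST ))
PER_DEV=$(( TOTAL / DEVS ))
echo "total=\$${TOTAL}/mo per_dev=\$${PER_DEV}/mo"
```

With these (assumed) numbers, five devs pay almost as much for the cluster fee as for the VMs themselves; bump DEVS to 30 and the flat fee becomes noise — which is exactly the small-team problem described above.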

Even so, as it stands, it’s probably still worth consideration at limited scale — as a solution, it’s secure, it’s flexible, has minimal operational overheads — and it’s really rather good.

Things I learned

So what was the outcome of this little adventure in remote development and breakfast salvation:

  • I really like Cloud Workstations. Admittedly, my life revolves around Google Cloud services — but it just makes my life so much easier when I actually need to start hacking apps and services. My workstation can be located in the same project or shared VPC as my target infrastructure, so I can easily authenticate and access services without ever having to venture outside of my Google Cloud estate.
  • I’m not actually sure I need a local dev env anymore — as all of my current development efforts are really targeting Google Cloud services, and the container dev experience really has been that good. Maybe even a Chromebook, one day… (but don’t tell my employers that just yet)
  • Any org with remote or distributed/offshore teams (and there are a lot of them) could really benefit from this approach — the improvement in management and security capabilities for operators, and the DevX for some of those devs, would be remarkable (not to mention quicker onboarding and increased velocity)
  • I’d like to see some more flexibility in the pricing model for smaller teams — to me, it seems you’d currently need a reasonable number of devs (maybe >20–30?) all running workstations to make this feel cost effective — but YMMV
  • My Mum makes the world’s best bacon sarnies (but then, I knew that already ;-) )

About CTS

CTS is the largest dedicated Google Cloud practice in Europe and one of the world’s leading Google Cloud experts, winning 2020 Google Partner of the Year Awards for both Workspace and GCP.

We offer a unique full-stack Google Cloud solution for businesses, encompassing cloud migration and infrastructure modernisation. Our data practice focuses on analysis and visualisation, providing industry-specific solutions for Retail, Financial Services, and Media and Entertainment.

We’re building talented teams ready to change the world using Google technologies. So if you’re passionate, curious and keen to get stuck in — take a look at our Careers Page and join us for the ride!


GCP Cloud Architect Lead at CTS. Thoughts here are my own and don’t necessarily represent my employer.