Google Cloud Technology Nuggets — February 1–15, 2023 Edition

Romin Irani
Google Cloud - Community
7 min read · Feb 20, 2023


Welcome to the February 1–15, 2023 edition of Google Cloud Technology Nuggets.

Please do not hesitate to give feedback on this edition and to share the subscription form with your peers.

Innovators Plus Subscription

The Innovators Plus annual subscription is now available for $299/year. This subscription gives you the following benefits:

  • Entire catalog of Google Cloud Skills Boost on-demand training
  • Up to $1,000 Google Cloud credits
  • A certification voucher
  • Special access to Google Cloud experts and execs
  • Live learning events

See the blog post for details on how Innovators have used the Google Cloud credits to help them prepare for certification, build interesting projects and more.

Google Cloud Next ‘23

In case you missed the news, Google Cloud Next is back as an in-person event and will be held on Aug 29–31, 2023 in San Francisco. Sign up here for information on Next ‘23.

API Management

This edition is loaded with posts on API Management. First up is a multi-part series on API Management policies. The first part deals with the concepts of policies and how they can be used to transform the messages flowing through your APIs. Part 2 looks at policies that help secure APIs, manage traffic, and implement custom behavior.

APIs are key to hybrid cloud deployments. How do organizations manage API programs at scale across hybrid deployments? Check out this two-part series: Part 1 and Part 2.

Kubernetes and GKE

If you are planning to test your GKE application for scalability, you need to address several areas, including cost, capacity, and the planning around the exercise itself. Google Cloud has created a set of best practices for scalability based on its experience running large installations. The blog post goes into detail on how to plan scalability testing of your GKE application. There is also a companion guide that you can check out, titled GKE scalability best practices.

DevOps and SRE

If you have been working with Kubernetes, chances are high that you are using Prometheus for your monitoring needs. But what if you have services running both in Kubernetes and on VMs, and want to unify monitoring of all of them in Prometheus? While the Kubernetes side is straightforward, it is challenging in the VM world, where you need to get service discovery right. On Google Cloud this is now easier: the Ops Agent, the agent you install on VMs to relay metrics into Google Cloud Monitoring, now supports Prometheus metrics. With this support, you can unify your metrics across services running on VMs and in Kubernetes. Check out the blog post, which contains a video of this feature in action.
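As a hedged sketch of what this can look like on a VM (the job name, port, and pipeline name below are hypothetical, and assume an application already exposing Prometheus metrics on localhost:8000), the Ops Agent's prometheus receiver type takes standard Prometheus scrape_configs:

```shell
# Write an Ops Agent config with a Prometheus receiver
# (job name, port, and pipeline name are illustrative).
sudo tee /etc/google-cloud-ops-agent/config.yaml > /dev/null <<'EOF'
metrics:
  receivers:
    prometheus:
      type: prometheus
      config:
        scrape_configs:
          - job_name: "my_app"
            scrape_interval: 10s
            static_configs:
              - targets: ["localhost:8000"]
  service:
    pipelines:
      prometheus_pipeline:
        receivers: [prometheus]
EOF

# Restart the agent so it picks up the new config.
sudo systemctl restart google-cloud-ops-agent
```

The scraped metrics then land in Cloud Monitoring alongside your Kubernetes-side Prometheus metrics.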

Looking to identify specific text in your logs and convert it into metrics? Google Cloud Logging supports this via log-based metrics, which allow you to derive metric data from the content of your log entries. Check out the working guide to log-based metrics via a sample application.
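As a minimal sketch (the metric name and filter here are hypothetical), a counter log-based metric can be created from a log filter with gcloud:

```shell
# Count log entries at severity ERROR or above as a metric
# (metric name and filter are illustrative).
gcloud logging metrics create error_count \
  --description="Count of error-level log entries" \
  --log-filter='severity>=ERROR'
```

The resulting metric can then be charted or alerted on in Cloud Monitoring like any other metric.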

Storage, Databases and Data Analytics

Connecting securely to a database is much harder than it looks. It’s not just about the connection string format and the drivers: once security comes into play, you have to consider provisioning and managing SSL certificates, generating usernames and passwords, rotating passwords, and more. Cloud SQL Connectors are meant to change that, and they are available in both library and binary form to go with your application. The Cloud SQL connectors are available for Java, Python and Go. See the blog post for more details on why you should use a connector at all.
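For the binary form, a minimal sketch of running the Cloud SQL Auth Proxy locally might look like the following (the instance connection name and port are hypothetical; the v2 proxy syntax is assumed):

```shell
# Start the Cloud SQL Auth Proxy; it handles TLS and IAM-based
# authorization, so the application connects to localhost instead
# (instance connection name is illustrative).
./cloud-sql-proxy --port 5432 my-project:us-central1:my-instance
```

Either way, certificate management and encryption are handled for you rather than in your application code.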

Have you ever been in a situation where you had to move a Google Cloud Storage bucket from multi-region to regional storage? Perhaps you went with the defaults at the start, and cost and performance benefits now necessitate the move. A bucket’s location cannot be changed once it is set; the only option is to create a new regional bucket and move the data into it. This blog post discusses how you can use the Storage Transfer Service (STS) to do this and the planning required for the same.
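The core of such a move can be sketched with a one-off transfer job (bucket names are hypothetical, and the regional destination bucket is assumed to already exist):

```shell
# Copy objects from a multi-region bucket to a new regional bucket
# via the Storage Transfer Service (bucket names are illustrative).
gcloud transfer jobs create \
  gs://my-multiregion-bucket gs://my-regional-bucket
```

The planning around cutover, naming, and permissions discussed in the post still applies on top of the transfer itself.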

A data pipeline architecture requires you to take several factors into consideration: data formats, transformation tools, change data capture, choosing between ETL, ELT, ETLT, and so on. There are multiple design patterns when it comes to data pipelines, and this blog post goes into several of them.

PITR (Point-in-Time Recovery) for Cloud SQL for PostgreSQL got a new feature that makes the decision to enable it easier. The write-ahead logs stored for PITR operations will no longer consume disk storage space. Instead, the transaction logs collected during the retention window will be stored in Google Cloud Storage and retrieved during a restore. These logs will be stored for up to seven days in the same Google Cloud region as your instance, at no additional cost to you. Check out the blog post for details.
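As a rough sketch (the instance name is hypothetical), enabling PITR and setting the log retention window on an existing instance can be done with gcloud:

```shell
# Enable point-in-time recovery and retain transaction logs for 7 days
# (instance name is illustrative).
gcloud sql instances patch my-postgres-instance \
  --enable-point-in-time-recovery \
  --retained-transaction-log-days=7
```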

Want to do away with password authentication to your database in your application workflows? What if you could integrate Cloud IAM itself into the authentication flow for your Cloud SQL databases? Cloud IAM database authentication does exactly that: it maps preexisting Cloud IAM principals (users or service accounts) to database-native roles. Check out the blog post for more details.
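As a minimal sketch (instance name and email are hypothetical), an IAM principal can be added as a database user with gcloud:

```shell
# Create a database user backed by a Cloud IAM principal
# (instance and email are illustrative).
gcloud sql users create alice@example.com \
  --instance=my-instance \
  --type=cloud_iam_user
```

The application then authenticates with an IAM access token instead of a stored database password.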

If you use Looker and Google Sheets in your organization, you can now use Connected Sheets for Looker to integrate with 50+ databases supported by Looker. Looker views, dimensions, and measures are presented through a pivot table. Check the post for integration details.

BigQuery BI Engine is the fast, in-memory analysis system for BigQuery. If you are interested in understanding how BI Engine works at depth, check out the post titled DeMystifying BigQuery BI Engine.

Machine Learning

Vertex AI consists of 20+ services. Setting these services up with best practices in mind, in a way that is ready for your future workloads, is a complex task. Cloud foundation blueprints that incorporate Vertex AI services can help you here. Check out this blog post, which covers 5 core services (Vertex AI Workbench, Vertex AI Feature Store, Vertex AI Training, Vertex AI Prediction and Vertex AI Pipelines) and the related architectures to get these foundations in place.


Networking

Looking to utilize IPv6 capabilities on Google Cloud? Check out the second part of the series, which helps you understand how to use Google Cloud’s GUA (Global Unicast Addresses) and ULA (Unique Local Addresses) address space within your VPC network and customize it for your environment.
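As a hedged sketch (names and region are hypothetical, and the VPC is assumed to have been created with an internal ULA IPv6 range), a dual-stack subnet with internal IPv6 addresses might be created like this:

```shell
# Create a dual-stack subnet with internal (ULA) IPv6 addresses
# (network, subnet name, region, and range are illustrative).
gcloud compute networks subnets create my-dual-stack-subnet \
  --network=my-vpc \
  --region=us-central1 \
  --range=10.0.0.0/24 \
  --stack-type=IPV4_IPV6 \
  --ipv6-access-type=INTERNAL
```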

Grow Series for Startups

We’ve covered the Start and Build series for startups in earlier editions of this newsletter. The next part, the Grow series, looks at how to grow and scale your startup, focusing on scaling, designing sustainable deployments, and exploring industry-specific use cases. The first episode in the series is out. Check out the blog post for more details.

Stay in Touch

Have questions, comments, or other feedback on this newsletter? Please send Feedback.

Looking to keep tabs on new Google Cloud product announcements? We have a handy page that you should bookmark → What’s new with Google Cloud.