Google Cloud Platform for AWS Experts

An intro to Google Cloud for AWS experts

George Mao
Google Cloud - Community


After over a decade of working in the AWS ecosystem, I joined Google in April 2024. It’s been an awesome journey learning an entirely new cloud platform, and after 6 months working with Google Cloud I’ve realized there are many similarities with AWS and a few fundamental differences. I’ll summarize my tech onboarding; hopefully this helps you!

Your general AWS cloud knowledge is highly transferable

All of the knowledge you gained with AWS around scalability, elasticity, and self-healing systems is applicable to GCP. For example, the core compute scaling construct in AWS is the EC2 Auto Scaling group (ASG). The GCP equivalent is a Managed Instance Group (MIG), and nearly everything you know about ASGs is applicable to MIGs.

Regions, VPCs, Subnets, and the majority of Cloud concepts are the same between AWS and GCP. There’s even a detailed mapping of AWS vs GCP services here.

This knowledge probably got me more than 50% of the way to passing the Pro Architect, Pro Developer, and Pro Cloud Database Engineer exams.

But … there are a few fundamental differences that you should know. Let’s dive into those below.

GCP Networking is more powerful (and complex)

GCP VPCs are global by default

This is the most impactful difference you should know. AWS VPCs are regional only, which makes global networking difficult. Only a few AWS services have any global capabilities built in at all (e.g., DynamoDB, S3, IAM). This means if you want to build multi-region architectures and route traffic between regions, you need additional constructs such as VPC peering, Transit Gateway, or VPN.

In GCP, a VPC spans all available regions (unless you configure it not to).

A single VPC can contain subnets in any regions and zones you want. Traffic can route anywhere within the VPC without any additional network constructs. This allows you to create global applications and also lets GCP provide natively global services (e.g., Spanner, Bigtable, global Load Balancing).
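For example, here’s a minimal sketch of creating one custom-mode VPC with subnets in two different regions (the network and subnet names are placeholders):

# Create a custom-mode VPC; it spans all regions by default
gcloud compute networks create demo-vpc --subnet-mode=custom

# Add regional subnets in two different regions to the same VPC
gcloud compute networks subnets create us-subnet --network=demo-vpc --region=us-central1 --range=10.0.1.0/24
gcloud compute networks subnets create eu-subnet --network=demo-vpc --region=europe-west1 --range=10.0.2.0/24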

A global, multi-region architecture within a single VPC

GCP has Network Tags so you can selectively apply network policies

Tags in AWS are generally used for permissions or billing metadata. GCP takes tags one step further: you can create network configurations (such as firewall rules), assign them a tag, and then apply that network tag to resources. Any resource carrying the tag automatically inherits the matching network configuration.

For example, I created a firewall rule that allows ingress traffic and assigned it a tag: iap-proxy. Next, I assigned that network tag to a GCE instance. The instance automatically inherits the firewall rule. I can selectively apply this tag to instances, even within the same subnet.
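As a rough sketch with the gcloud CLI (resource names are placeholders; 35.235.240.0/20 is the source range IAP uses for TCP forwarding):

# Firewall rule that only applies to instances carrying the iap-proxy tag
gcloud compute firewall-rules create allow-iap-ssh --network=demo-vpc --direction=INGRESS --action=ALLOW --rules=tcp:22 --source-ranges=35.235.240.0/20 --target-tags=iap-proxy

# Any instance created with (or later given) the tag inherits the rule
gcloud compute instances create web-1 --zone=us-central1-a --tags=iap-proxy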

Network Tag applied to a GCE Instance

GCP usually doesn't issue service-level DNS entries

Many AWS services issue a service-level DNS CNAME entry that points to the actual IP of the resource. For example, if you use RDS or a load balancer, you’ll be issued a DNS entry like mydb.123456789012.us-east-1.rds.amazonaws.com or 1234567890abcdef.elb.us-east-2.amazonaws.com. AWS manages these CNAMEs for you and changes the backend targets during failover or maintenance.

In general, GCP issues static IPs that remain with your resource for its life.

The IPs themselves are moved to target the correct backends during any service-level event. You’ll see this behavior with Cloud SQL or Cloud Load Balancing resources.
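For example, you can inspect the IPs assigned to a Cloud SQL instance, or reserve a static IP yourself for a load balancer frontend (the instance and address names below are hypothetical):

# Describe a Cloud SQL instance; the ipAddresses field lists its stable IPs
gcloud sql instances describe my-db

# Reserve a global static IP to use as a load balancer frontend
gcloud compute addresses create my-lb-ip --global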

Containers power GCP serverless backends

Cloud Run is the foundation

GCP’s Cloud Run is most similar to AWS Fargate. Both deploy container images and manage and scale the entire backend hosting infrastructure on your behalf. However, the AWS serverless services (Fargate and Lambda) operate independently of each other and don't share any underlying constructs, while GCP uses Cloud Run as the common foundation.
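A minimal Cloud Run deployment looks roughly like this (the service name is a placeholder; the image is Google’s public hello sample):

gcloud run deploy hello-svc --image=us-docker.pkg.dev/cloudrun/container/hello --region=us-central1 --allow-unauthenticated

Cloud Run sets up the HTTPS endpoint, revisions, and autoscaling (including to zero) for you, much like Fargate behind a load balancer would, but with less wiring.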

Cloud Run is a building block for other serverless services

GCP uses Cloud Run as the core building block for multiple other serverless offerings. Cloud Run Jobs and Cloud Run Functions both deploy to and execute on the Cloud Run service. This means GCP can add a feature in one place and every service built on Cloud Run gets it immediately. For example, when Cloud Run added GPU support, Cloud Run Functions got access right away.
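For example, a job and a function both end up executing on the same Cloud Run infrastructure (the names, image path, and source layout below are hypothetical):

# Cloud Run Job: runs a container to completion instead of serving requests
gcloud run jobs create nightly-report --image=us-central1-docker.pkg.dev/my-project/repo/report:latest --region=us-central1

# Cloud Run Function: deploys source code that executes on Cloud Run
gcloud functions deploy hello-fn --gen2 --runtime=python312 --region=us-central1 --source=. --entry-point=handler --trigger-http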

This is super powerful, as GCP is the only cloud provider with a serverless offering that provides multi-region, scale-to-zero, and pay-as-you-go GPU capabilities (as of the date of this post, anyway).

Cloud Run Functions run inside Cloud Run containers

AWS is well known for building its own lightweight virtualization technology, Firecracker, to provide isolation and security for AWS Lambda. Firecracker allows Lambda to cold start blazingly fast, sometimes in just a few milliseconds. However, Lambda runs on Firecracker microVMs rather than containers and does not share a common foundation with Fargate, so each service has to add features independently, doubling the engineering effort. In GCP, all Run-based services get immediate access to new features as soon as they are added to Run.

Check a box and I can get an NVIDIA L4 GPU that scales up in under a minute!
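At the time of writing, GPU support is exposed as a beta flag on Cloud Run deployments. A rough sketch (service and image names are placeholders; I believe GPU requires CPU always allocated and at least 4 vCPU / 16 GiB of memory, hence the extra flags):

gcloud beta run deploy inference-svc --image=us-central1-docker.pkg.dev/my-project/repo/llm-server:latest --region=us-central1 --gpu=1 --gpu-type=nvidia-l4 --cpu=4 --memory=16Gi --no-cpu-throttling --max-instances=3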

Finally, Security

Temporary Permissions

AWS utilizes IAM Roles to provide short-term credentials to users, applications, and native AWS services. Policies attach to Roles to give the Role its permissions. Principals perform an AssumeRole API call to obtain credentials for the target Role. These principals can be users, other roles, or even AWS services.

GCP uses Service Accounts (SAs) to provide credentials. GCP roles bind to service accounts to grant access permissions. You can impersonate an SA to obtain temporary credentials for short-term use. Managed services (e.g., Cloud Run) use the attached SA for all actions that require authorization.
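A minimal sketch of impersonation with the gcloud CLI (the service account email is a placeholder):

# Mint a short-lived access token by impersonating a service account
gcloud auth print-access-token --impersonate-service-account=deploy-sa@my-project.iam.gserviceaccount.com

# Or have subsequent gcloud commands impersonate that SA
gcloud config set auth/impersonate_service_account deploy-sa@my-project.iam.gserviceaccount.com

The caller needs the Service Account Token Creator role (roles/iam.serviceAccountTokenCreator) on the target SA for impersonation to succeed.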

Credentials

AWS utilizes a credential provider chain that searches various locations in order to obtain credentials when an SDK makes an API call. You can refresh your temporary credentials by calling the STS service. This provides an access key, secret key, and session token that you store in ~/.aws/credentials.

aws sts assume-role --role-arn arn:aws:iam::123456789012:role/role-name --role-session-name "RoleSession1" 

The AWS SDK / CLI use these credentials during the credential lookup process.

In GCP, the same concept is called Application Default Credentials (ADC). ADC searches your system in the following order:

  1. GOOGLE_APPLICATION_CREDENTIALS environment variable
  2. User credentials set up by using the Google Cloud CLI
  3. The attached service account, returned by the metadata server

Anytime the Google Cloud client SDKs make an API call, they search for credentials in that order (the gcloud CLI itself uses the separate credentials from gcloud auth login). You can refresh your ADC user credentials by executing:

gcloud auth application-default login

This refreshes the credentials file at $HOME/.config/gcloud/application_default_credentials.json

Summary

There are a few other things you should check out, as they helped me clear the Architect, Developer, and Database Engineer exams:

  • Google Cloud SQL operates its resources in GCP-owned VPCs, while AWS operates RDS resources in customer-owned VPCs. This has an impact on private routing, which I’ll cover in a future post.
  • GCP Projects are equivalent to AWS Accounts. These provide the basis and logical grouping for all resources inside the Project.
  • Google Identity-Aware Proxy (IAP) is a unique feature that does not natively exist in AWS. It allows you to connect to private resources using context awareness rather than tunneling through a VPN or using bastion hosts. This works for web users via HTTPS or SSH/RDP users via TCP forwarding (all without public IPs). Just grant either the IAP-secured Web App User role or the IAP-secured Tunnel User role (roles/iap.tunnelResourceAccessor) to provide permissions, as in the sketch below.
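For example, granting the tunnel role and then SSHing to a VM with no public IP might look like this (the project, user, and instance names are hypothetical):

# Grant a user permission to tunnel through IAP
gcloud projects add-iam-policy-binding my-project --member=user:alice@example.com --role=roles/iap.tunnelResourceAccessor

# SSH to a private instance, tunneled through IAP (no public IP needed)
gcloud compute ssh web-1 --zone=us-central1-a --tunnel-through-iap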

Reach out to me if I can help you with anything related to GCP :)
