Secure Ways to Authenticate to Google Cloud Platform

Prasoonprit
Google Cloud - Community
6 min read · Sep 30, 2024

TLDR: As organizations increasingly rely on cloud services like GCP, understanding how to secure these environments becomes essential. Authentication is the first line of defense in protecting your cloud resources. By learning secure ways to authenticate to GCP, you can prevent unauthorized access, protect sensitive data, and comply with security regulations. This blog will help you navigate the complex landscape of cloud security and provide you with actionable insights to enhance your GCP authentication processes.

Authentication is the process by which your identity is confirmed through the use of some kind of credential. Authentication is about proving that you are who you say you are.

Authentication use cases:

Setting up authentication can be tricky because there are a variety of use cases; I cover several of them below.

In security, the three “A”s of controlling access are Authentication (Who is the user?), Authorization (What is the user allowed to do?), and Auditing (What are they doing?).

In Google Cloud, Cloud Identity performs authentication. Cloud Identity is the identity provider (IdP) for Google Cloud. It also is the Identity-as-a-Service (IDaaS) solution that powers Google Workspace. It stores and manages digital identities for Google Cloud users. Setting up Cloud Identity is a prerequisite to onboarding your organization onto Google Cloud.

Authentication options

Aside from username and password, there are two frequently used authentication options:

  • 2-Step Verification (2SV) with Google authentication
  • SSO authentication with a third-party identity provider

(Image: Basic difference between Authentication and Authorization)

Now, I won’t go into much detail about how to set up authentication. Instead, this blog covers the different ways you can authenticate to Google Cloud resources, with a focus on local development and authentication, because that is where security is most often compromised: people jump straight to service account keys and similar approaches, which are an insecure option.

For Local Development:

As a cloud engineer, I am a big fan of using command-line tools on a black screen rather than checking things manually in the console. Hence, the gcloud command-line utility becomes the saviour.

You could use your user credentials to sign in to the gcloud CLI:

gcloud init
gcloud auth login

You could use service account impersonation (make sure the Service Account Credentials API is enabled in the given project and that you have the required role, typically roles/iam.serviceAccountTokenCreator on the service account). For a specific gcloud command, add the flag --impersonate-service-account=<sa-name>@project.iam.gserviceaccount.com. For example:

gcloud projects list --impersonate-service-account=<sa-name>@project.iam.gserviceaccount.com

To persist this setting, use the property auth/impersonate_service_account:

gcloud config set auth/impersonate_service_account <sa-name>@project.iam.gserviceaccount.com

To unset it, use:

gcloud config unset auth/impersonate_service_account

You could also use gcloud auth application-default login:

There is a difference between auth login and application-default login.

gcloud auth login:

  • Authenticates your Google Cloud CLI commands as a user.
  • Credentials are stored for CLI use only and are tied to the user’s account.
  • Typically used for managing GCP resources manually.

gcloud auth application-default login:

  • Authenticates application code using Google Cloud client libraries.
  • Credentials are stored in a standard location for Application Default Credentials (ADC).
  • Typically used for testing or running code that interacts with Google Cloud APIs.
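
To see which credentials and project ADC resolves to on your machine after running gcloud auth application-default login, here is a minimal sketch in Python (assuming the google-auth package is installed):

# Prints the credential type and project that ADC currently resolves to.
import google.auth

credentials, project_id = google.auth.default()
print(type(credentials).__name__, project_id)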

For Local Python Development:

First, set up Application Default Credentials (ADC) and install the corresponding client library. The client library automatically checks for and uses the credentials you have provided to ADC to authenticate to the APIs your code uses, so your application does not need to explicitly authenticate or manage tokens.

You could also set up ADC for client libraries using service account impersonation:

gcloud auth application-default login --impersonate-service-account=<sa-name>@project.iam.gserviceaccount.com

Client libraries that support impersonation can use those credentials automatically.

# The storage client automatically picks up the (impersonated) ADC credentials;
# no explicit token handling is needed.
from google.cloud import storage

client = storage.Client()
buckets = list(client.list_buckets())
print(buckets)
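
If you would rather not rely on the gcloud-level impersonation setting, the google-auth library also lets you construct impersonated credentials explicitly in code. A hedged sketch, assuming google-auth and google-cloud-storage are installed; the service account and project names below are placeholders:

# Build impersonated credentials on top of your ADC user credentials.
import google.auth
from google.auth import impersonated_credentials
from google.cloud import storage

source_credentials, _ = google.auth.default()  # your ADC credentials
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="<sa-name>@project.iam.gserviceaccount.com",  # placeholder
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

client = storage.Client(project="your-project-id", credentials=target_credentials)
print(list(client.list_buckets()))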

For Docker Local Development:

As with local development, ADC allows your containerized applications to automatically authenticate with Google Cloud services without embedding sensitive credentials.

First, authenticate on your local machine using:

gcloud auth application-default login

Mount the ADC credentials inside the Docker container by sharing the ~/.config/gcloud/application_default_credentials.json file and point the GOOGLE_APPLICATION_CREDENTIALS environment variable at it. On a Linux-based system, the command looks like this:

docker run -v "$HOME/.config/gcloud/application_default_credentials.json":/gcp/creds.json:ro --env GOOGLE_APPLICATION_CREDENTIALS=/gcp/creds.json <image_name>

You could also volume-map your entire gcloud directory inside the container, as shown below. This lets other tools in the container access your gcloud settings (beyond just the application_default_credentials.json file), so it may expose more configuration than necessary. It is ideal when you want the container to fully replicate your local development environment, including the gcloud configuration (for example, complex local simulations involving multiple Google Cloud tools).

docker run -v ~/.config/gcloud:/root/.config/gcloud -e GOOGLE_APPLICATION_CREDENTIALS="/root/.config/gcloud/application_default_credentials.json" <image_name>

Running applications using your user credentials can be fine for development, but is not something you should do in production.

You can of course also use a service account file.
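
For completeness, here is a minimal sketch of loading a downloaded key file with the Python client library (the path ./sa-key.json is a placeholder); prefer ADC or impersonation wherever possible, since key files are long-lived secrets:

# Sketch only: authenticate with a downloaded service account key file.
# Key files are long-lived secrets; prefer ADC or impersonation when you can.
from google.cloud import storage

client = storage.Client.from_service_account_json("./sa-key.json")
print(list(client.list_buckets()))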

Another use case is authenticating Docker to Google’s Artifact Registry. Most of the time we simply run gcloud auth login followed by gcloud auth configure-docker, which stores your access and refresh tokens in your home directory; any user with access to your file system can use those credentials. That is a long-lived credential and hence not the most secure option. I prefer to use a short-lived access token instead (valid for 60 minutes; the generated token is used as the password for docker login), which also works best for automation and CI/CD workflows.

gcloud auth print-access-token --impersonate-service-account ACCOUNT | docker login -u oauth2accesstoken --password-stdin https://LOCATION-docker.pkg.dev 

For Terraform Local Development:

As described in the Terraform Google provider reference, Terraform supports the use of access tokens, service account impersonation, and ADC as well.

I always use ADC for my local Terraform runs; this way, I avoid specifying a credentials file explicitly.

gcloud auth application-default login


provider "google" {
project = "your-project-id"
region = "your-region"
}

Apart from the above:

To use an access token with impersonation, run:

export GOOGLE_OAUTH_ACCESS_TOKEN=$(gcloud auth print-access-token --project=$PROJECT --impersonate-service-account=<sa-name>@project.iam.gserviceaccount.com)

Please note that the access token has a limited lifetime and is not automatically refreshed.

To make direct use of service account impersonation, set:

export GOOGLE_IMPERSONATE_SERVICE_ACCOUNT=<sa-name>@project.iam.gserviceaccount.com

For REST API Authentication:

For local development, use your gcloud user credentials to authenticate; for production environments, use ADC. Below are the commands you could use for each:

# Using your gcloud user credentials:
curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" "https://compute.googleapis.com/compute/v1/projects/{project}/global/networks/{network}"

# Using Application Default Credentials (ADC):
curl -X GET -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" "https://compute.googleapis.com/compute/v1/projects/{project}/global/networks/{network}"

For GKE Enterprise Clusters:

GKE is moving towards a Private Service Connect (PSC) based architecture, which is the default for all new clusters. To connect to a GKE Enterprise cluster, use the Connect gateway.

A little refresher:

Fleets in Google Cloud are logical groups of Kubernetes clusters and other resources that can be managed together; they are created by registering clusters to Google Cloud. The Connect gateway builds on the power of fleets to let GKE Enterprise users connect to and run commands against fleet member clusters in a simple, consistent, and secure way, whether the clusters are on Google Cloud, other public clouds, or on premises. It also makes it easier to automate DevOps processes across all your clusters.

This is similar to running gcloud container clusters get-credentials using your Google Cloud account.

# List the clusters registered to your fleet
gcloud container fleet memberships list
# Fetch Connect gateway credentials for a fleet member cluster
gcloud container fleet memberships get-credentials membership-name
# Run kubectl commands through the Connect gateway
kubectl get pods

Summary: Choosing the right authentication method can be a tough decision because there are so many options available, especially for local development. I hope this post has guided you through some of the secure ways to achieve it.

That’s it for this post. You can find me on LinkedIn to stay connected.
