Service Account credentials management: how to improve your security posture

Julio Diez
Google Cloud - Community
8 min read · Jul 20, 2020

Background

When you want to call an API to, for example, create a Cloud Storage (GCS) bucket, you use your Google Cloud Platform (GCP) account to be authorized. That account is your identity, and it has the format of an email address, like username@yourdomain.com. If you have the proper role and permissions, your call will succeed.

Service accounts are another kind of account, used by applications rather than humans, to make authorized API calls. Service accounts are also identified by an email address, following a format like this: sa-name@project-id.iam.gserviceaccount.com.

Service accounts differ from user accounts in a few ways; in particular, they don’t use passwords to authenticate to Google. Instead, they use public/private RSA key pairs. This may seem cumbersome compared to managing passwords, but it is actually a more appropriate mechanism for application authentication. And Google Cloud takes care of all the details of managing these keys: generation, rotation, deletion, and escrow.

However, there are situations in which you need to manage those keys yourself. For example, if your application calling Google APIs is not running in a VM in the cloud but on a machine in your data center or on your laptop, you will need to download the corresponding service account’s credentials, containing the private RSA key, to be able to authenticate to Google. You then take responsibility for managing these credentials and keeping them secure. That’s not an easy task, and you should consider using a secrets management solution like HashiCorp Vault.

The exfiltration problem

Not every company can afford a secrets management solution, though. And even if you use one, there’s still a chance those credentials leak outside your company. They may be stored in a repository (hopefully not a public one) that many developers can access. Or you may need to hand those credentials to a partner, losing visibility and control over them.

For example, a common situation for a company is to run their CI/CD pipelines on-premises to deploy cloud infrastructure. They use a highly privileged service account to create projects, VPCs, VMs, firewall rules, and all types of resources, usually with a provisioning tool like Terraform. The provisioning tool needs the service account credentials to call the APIs. If the credentials are exfiltrated, an attacker can use them to access your resources and data, putting your business at risk.

Exfiltration of credentials

Mitigating the risks

As I said, once you download credentials from GCP you are responsible for keeping them secure. Still, if the risk of exfiltration is high and doesn’t let you sleep well, or if you want a plan B just in case, you can leverage Cloud IAM to mitigate that risk and improve your security posture.

Of course, the more privileged the service account, the higher the risk. If you could reduce its permissions, you would be in a much better situation if it were compromised, since the risk of someone accessing your systems and causing harm would be lower. But you still need complete access, so how can you achieve both at the same time?

That’s something you can get by impersonating service accounts. The idea of impersonation is to use one identity A to act as another identity B, but without having access to B’s credentials. This is achieved by granting identity A the ability to get an access token for identity B. This is the only permission we will grant to identity A, and whenever it wants to access any resource it will have to present the access token. In Google Cloud, this permission is granted through the Service Account Token Creator role. It allows you to create OAuth2 access tokens for a service account, which Google uses to authorize API calls (there are more permissions in this role that I will not deal with in this article).

Let’s take the example of deploying cloud infrastructure through Terraform from on-premises. sa-folder@ is a highly privileged service account you use to access resources in a folder representing an environment, e.g. production. Its credentials shouldn’t leave GCP, so we create another service account, sa-external@, to use in our CI/CD pipeline. We grant this service account the token creator role on sa-folder@:

Impersonation through Service Account Token Creator role

The following example shows how to do this through ‘gcloud’ commands:

# Create the highly privileged service account
$ gcloud iam service-accounts create sa-folder --project ${PROJECT_A_ID}
# Assign roles on production folder. Using just 'owner' for simplicity
$ gcloud resource-manager folders add-iam-policy-binding ${PROD_FOLDER_ID} \
--member=serviceAccount:sa-folder@${PROJECT_A_ID}.iam.gserviceaccount.com \
--role=roles/owner
# Create the external service account
$ gcloud iam service-accounts create sa-external --project ${PROJECT_B_ID}
# Assign 'token creator' role on sa-folder@
$ gcloud iam service-accounts add-iam-policy-binding sa-folder@${PROJECT_A_ID}.iam.gserviceaccount.com \
--project ${PROJECT_A_ID} \
--member=serviceAccount:sa-external@${PROJECT_B_ID}.iam.gserviceaccount.com \
--role=roles/iam.serviceAccountTokenCreator

Now you can download sa-external@’s credentials and use them to access GCP, issuing calls to get access tokens for sa-folder@.
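As a quick check from a workstation, this flow can be exercised directly with ‘gcloud’, which supports impersonation via the --impersonate-service-account flag. A sketch; the key file name is a placeholder:

```shell
# Authenticate locally as sa-external@ using its downloaded key file
# (the file name is hypothetical)
gcloud auth activate-service-account \
    --key-file=sa-external-key.json
# Request a short-lived access token for sa-folder@ via impersonation.
# This call only succeeds because sa-external@ holds the
# Service Account Token Creator role on sa-folder@.
gcloud auth print-access-token \
    --impersonate-service-account=sa-folder@${PROJECT_A_ID}.iam.gserviceaccount.com
```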

You may wonder: how is this better than the original situation? It is still possible to use sa-external@ to access resources! Of course it is; otherwise we couldn’t fulfill our own requirements. However, there’s a level of indirection now: sa-external@ is not allowed to access resources in our projects directly, so any API call will fail, with the exception of creating a token for sa-folder@. That’s another piece of information an attacker will need besides the credentials, and the added indirection will allow us to add more security controls.

Mitigating the exfiltration of credentials

It is true that the process is a bit more complex now. It requires one more service account and a two-step authorization. However, the benefits of the increased security outweigh the added complexity. I will elaborate on this in the following sections.

Regarding the extra authentication step, it only happens once at the beginning, and the software flow is not substantially changed. For example, the Google Terraform provider includes this feature and makes it easy to use. The following code shows the steps needed:

  • First, declare a Terraform data source to get an OAuth2 access token for the highly privileged service account, sa-folder@. The script is run with sa-external@’s credentials.
provider "google" {
  alias = "initial"
}

data "google_service_account_access_token" "default" {
  provider               = google.initial
  target_service_account = "sa-folder@${PROJECT_A_ID}.iam.gserviceaccount.com"
  scopes                 = ["cloud-platform"]
  lifetime               = "3600s"
}

Note that the token is non-refreshable and can have a maximum lifetime of 1 hour.

  • Then define a provider using the new access token, and use this provider to generate any resource.
provider "google" {
  access_token = data.google_service_account_access_token.default.access_token
}

resource "google_compute_network" "vpc_network" {
  provider = google
  name     = "vpc-test"
}

Auditing access

Auditing is also simplified and enhanced when using impersonation. Auditing access for sa-folder@ directly means you have to audit every possible resource the service account can access. Given that this service account has access to multiple resources, its logs will be numerous and spread across multiple projects. However, an API call to get an access token for a service account produces a single audit log. This way you can easily spot when impersonation was used to access any resource, without searching through all your projects. And every audit log from any resource accessed will also include the impersonating service account, if one was used, as part of the authentication info.

Audit log of an impersonation API call
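To find those impersonation events, you can filter the audit logs for the GenerateAccessToken method of the IAM Service Account Credentials API. A sketch with ‘gcloud’, assuming Data Access audit logs are enabled for that API in the project hosting sa-folder@:

```shell
# List recent impersonation calls against service accounts in the project,
# showing when they happened, who called, and from which IP
gcloud logging read \
    'protoPayload.serviceName="iamcredentials.googleapis.com"
     AND protoPayload.methodName="GenerateAccessToken"' \
    --project ${PROJECT_A_ID} \
    --limit 10 \
    --format='value(timestamp, protoPayload.authenticationInfo.principalEmail, protoPayload.requestMetadata.callerIp)'
```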

This capability can be used to enhance your security in some situations. Imagine you have several partners who require the same access to your systems. You can have one privileged service account and create an impersonating service account, with corresponding credentials, per partner. This way, if one set of credentials is exfiltrated you can point to the specific partner who should improve their security controls. Also, through IAM you can disable impersonation permissions for that partner without affecting the rest. The audit log for getting an access token includes other information like the caller IP and caller user agent, so you can develop programmatic solutions to harden your security further.

If you try to achieve similar results without impersonation, you will see how painful it can be. In particular, if you were thinking of creating several keys for the same service account to distribute to your partners, that doesn’t help: every credential has a key identifier, but this key ID is not recorded in audit logs.

One more layer

We have seen how impersonation can help mitigate the risk of exfiltration in several ways while also improving credentials management. We can go further and, for some cases, add another layer of security. Cloud IAM offers one more mechanism you can leverage to improve your security posture, alone or in combination with impersonation: Cloud IAM Conditions.

With IAM Conditions, you can specify conditions to enforce attribute-based access control on IAM grants. This means that a role granted to an identity on a resource can be conditioned on certain attributes of the resource, like its type, or attributes of the request, like the IP address the API call is made from. Which attributes you can use depends on the Google Cloud service, but the number of attributes, and of services supporting them, is growing.

Let’s take the scenario of running a CI/CD pipeline on-premises. You created sa-folder@, and sa-external@ with the token creator role on sa-folder@. Let’s say you only deploy to production on Mondays, after reviewing committed changes and having your SRE team ready for unexpected events. In this case, the permanent grant of sa-external@ on sa-folder@ is not really needed, and is an unnecessary security risk too. But granting and removing access repeatedly is not practical. You could develop a programmatic solution, but IAM Conditions offers a better way: you can define your own deployment window, granting impersonation permissions only on Mondays during working hours:

Applying IAM Conditions to create a deployment window
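Such a window can be expressed as a condition on the token creator binding itself. A sketch with ‘gcloud’; the condition title, time zone, and exact hours are assumptions (IAM condition expressions use CEL, where getDayOfWeek returns 0 for Sunday, so Monday is 1):

```shell
# Grant the token creator role only on Mondays, 09:00-17:00
gcloud iam service-accounts add-iam-policy-binding sa-folder@${PROJECT_A_ID}.iam.gserviceaccount.com \
    --project ${PROJECT_A_ID} \
    --member=serviceAccount:sa-external@${PROJECT_B_ID}.iam.gserviceaccount.com \
    --role=roles/iam.serviceAccountTokenCreator \
    --condition='title=monday-deploy-window,expression=request.time.getDayOfWeek("Europe/Madrid") == 1 && request.time.getHours("Europe/Madrid") >= 9 && request.time.getHours("Europe/Madrid") < 17'
```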

By creating this deployment window, you are reducing the window of opportunity for an attacker to access your systems. Your SecOps team will appreciate it and feel a bit more relaxed during weekends. And any attempt to access outside this window will generate an audit log that you can act upon.

If a static deployment window doesn’t fit your needs and you need on-demand access, you can create a solution, based e.g. on Cloud Functions, that at the press of a button configures IAM Conditions to grant temporary access for a certain period of time.
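One simple variant of on-demand access needs no extra infrastructure: an IAM condition comparing request.time against a fixed timestamp makes the grant expire by itself. A sketch; the expiry time is of course a placeholder:

```shell
# Grant impersonation rights that silently expire at the given time
gcloud iam service-accounts add-iam-policy-binding sa-folder@${PROJECT_A_ID}.iam.gserviceaccount.com \
    --project ${PROJECT_A_ID} \
    --member=serviceAccount:sa-external@${PROJECT_B_ID}.iam.gserviceaccount.com \
    --role=roles/iam.serviceAccountTokenCreator \
    --condition='title=temporary-access,expression=request.time < timestamp("2020-07-21T18:00:00Z")'
```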

Conclusion

Downloading service account credentials should be avoided: credentials management is a risky task that is difficult to get right. But sometimes you have no other option. To help mitigate the associated risks, I explained some techniques and mechanisms you can easily apply to improve your security posture. You should do your due diligence when managing secrets, and I hope this article helps you!


Strategic Cloud Engineer at Google Cloud, focused on Networking and Security