But sorry, this is not the world we live in! We live in a world where you have to protect yourself and your resources. Because we are using Cloud resources, the security mechanism we choose is probably the most important part of the architecture.
As Google recommends, before resorting to a key for your Service Account you have several options that do not require a key at all, such as:
- Workload Identity
- Application Default Credentials
But in some particular use cases you really do need a key… such as when connecting a legacy on-premises application to the Cloud.
A service account key lets an application authenticate as a service account, similar to how a user might authenticate with a username and password, but without a password!
The problem with a key is its lifecycle; you can monitor what is going on by following another article I wrote here.
But this is not enough, probably because the most important missing feature in this area is the ability to set an expiration date when you generate a key. This is currently not possible on GCP (it is possible on Azure, for example).
Google told me that two important configurations could solve my problem:
- using an IAM condition to set an access expiration date
- using a Key Factory solution (the most famous being HashiCorp Vault)
My point of view on these two options:
- An IAM condition is not enough: first because BigQuery, for example, does not accept conditions at the dataset level, and second because I don't want to go back into IAM every XX days to update the configuration.
- A Key Factory is the best option, but not HashiCorp Vault, because I don't want to pay for a black box, nor deploy a complex stack based on GKE plus many other components.
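To make the first point concrete, here is a minimal sketch of what a time-bound IAM condition looks like. The binding is built as a plain dict in the shape accepted by the IAM `setIamPolicy` API; the member, role, and duration are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def expiring_binding(member: str, role: str, days: int) -> dict:
    """Build an IAM policy binding that expires `days` from now,
    using a CEL condition on request.time."""
    expiry = datetime.now(timezone.utc) + timedelta(days=days)
    return {
        "role": role,
        "members": [member],
        "condition": {
            "title": "expires_after",
            "description": f"Access expires {days} days after grant",
            # CEL expression evaluated by IAM on every request
            "expression": f'request.time < timestamp("{expiry.strftime("%Y-%m-%dT%H:%M:%SZ")}")',
        },
    }

binding = expiring_binding("serviceAccount:app@my-project.iam.gserviceaccount.com",
                           "roles/bigquery.dataViewer", 90)
print(binding["condition"]["expression"])
```

This works for many resources, but as noted above it is a manual, per-grant configuration, and some resource levels (like BigQuery datasets) don't support it.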
Here, in my company, we promote and love serverless technology as much as possible.
So here we are: why not try to build our own serverless Key Factory? Oh yeah!
After discussing with Seth Vargo, probably the craziest Googler when it comes to security, we found common ground!
Following Google's recommendation:
“A security best practice is to rotate your service account keys regularly. You can rotate a key by creating a new key, switching applications to use the new key and then deleting old key. Use the serviceAccount.keys.create() method and serviceAccount.keys.delete() method together to automate the rotation.”
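The recipe in that quote boils down to a strict ordering: create, switch, then delete. Here is a minimal sketch of that sequence, with an in-memory dict standing in for the real keys API (nothing here actually calls GCP):

```python
import uuid

def rotate_key(keys: dict, app_config: dict) -> str:
    """Illustrate Google's rotation recipe with an in-memory stand-in
    for serviceAccount.keys.create() / serviceAccount.keys.delete():
    1. create a new key, 2. switch the application to it,
    3. delete the old key only after the switch."""
    old_key_id = app_config.get("key_id")
    # step 1: keys.create() - mint a new key
    new_key_id = uuid.uuid4().hex
    keys[new_key_id] = {"active": True}
    # step 2: switch the application over to the new key
    app_config["key_id"] = new_key_id
    # step 3: keys.delete() - remove the old key
    if old_key_id is not None:
        del keys[old_key_id]
    return new_key_id

keys = {"aaa": {"active": True}}
app = {"key_id": "aaa"}
new_id = rotate_key(keys, app)
```

The ordering matters: deleting before switching would leave the application with a dead credential, which is exactly the outage the best practice is designed to avoid.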
Here is what I’m doing :
- At the Organization level, we deploy the constraints/iam.disableServiceAccountKeyCreation org policy, which prevents users from creating user-managed service account keys
- We choose one GCP project (per Zone or Entity of my company) where the policy will not be enforced and where all the keys will be generated
- We deploy a pipeline that provides a key after a human request the first time (approved or not, depending on the use case) and hands all subsequent key renewals over to the software directly
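The effect of the org policy plus the exempt factory project can be sketched as a simple predicate. The constraint name is the real one; the project names and the simplified policy model are assumptions for illustration:

```python
def key_creation_allowed(project: str, policy: dict) -> bool:
    """Evaluate, in a simplified model, whether user-managed key
    creation is allowed in `project` under the org policy."""
    if not policy["enforced"]:
        return True
    # only the designated factory project(s) escape the constraint
    return project in policy["exempt_projects"]

org_policy = {
    "constraint": "constraints/iam.disableServiceAccountKeyCreation",
    "enforced": True,
    "exempt_projects": ["key-factory-zone-a"],  # hypothetical factory project
}
```

So every key in the organization has to come out of the factory project, which is exactly the funnel the pipeline needs.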
This chart needs some explanation:
- The first request is made by Mr. X through the ServiceNow portal, explaining why he really needs a key; after approval by the Security team, the key can be created
- Renewal requests are made directly by the software itself through a Pub/Sub topic, the classic way!
- Developments of the factory are pushed to a repo that Cloud Build watches; we put the YAML file of the latest Cloud Workflows version in a bucket so that we can deploy it later with Terraform (which today only lets you import a YAML file from GCS)
Now let's dive deep into the Factory:
After API Gateway receives the secured call from ServiceNow, the Cloud Function prepares the message for Cloud Build:
- the first step deploys the Key Factory into the specific project for the Zone (where the org policy is disabled and only a few people have permissions), fetching the YAML workflow file from Cloud Storage and deploying it with Terraform
- the second step runs the workflow end to end to provide a SA key
- the third step grants the requestor permission to read the key stored in Secret Manager
- the last step (if the request came from ServiceNow) calls ServiceNow back to close the request with the request information.
Here's now a deeper look at the Cloud Workflows part (the second step explained above):
- As you can see, I first check whether the SA already exists (if not, I create it)
- Second, I create the key from the SA (even if another one already exists, because maybe it's renewal time)
- Third, I create a secret related to this key (or just check that the secret already exists)
- Fourth, I add the key payload as the latest version of the secret
- Last comes the monitoring part, meaning inserting a document in Firestore that says:
“the key XXX has been created at 20:10 on 11.05.2021 and it’s stored in secret XYZ.”
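The five steps above can be sketched as a single in-memory walk-through. Plain dicts and lists stand in for the IAM, Secret Manager, and Firestore APIs; all names are illustrative:

```python
from datetime import datetime, timezone

def run_key_factory(sa_email, accounts, secrets, firestore):
    """Simplified in-memory walk-through of the five workflow steps
    (the real workflow calls the IAM, Secret Manager and Firestore APIs)."""
    # 1. create the service account if it does not exist yet
    if sa_email not in accounts:
        accounts[sa_email] = {"keys": []}
    # 2. always create a fresh key (it may be renewal time)
    key_id = f"key-{len(accounts[sa_email]['keys']) + 1}"
    accounts[sa_email]["keys"].append(key_id)
    # 3. create the secret for this SA if it is missing
    secret_name = f"sa-key-{sa_email.split('@')[0]}"
    secrets.setdefault(secret_name, [])
    # 4. add the key payload as the latest secret version
    secrets[secret_name].append({"key_id": key_id})
    # 5. monitoring: record the creation in Firestore
    firestore.append({
        "key_id": key_id,
        "secret": secret_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
    })
    return key_id, secret_name

accounts, secrets, firestore = {}, {}, []
key_id, secret = run_key_factory("app@my-project.iam.gserviceaccount.com",
                                 accounts, secrets, firestore)
```

Running it a second time for the same SA illustrates the renewal path: the account is reused, a new key and secret version are added, and a second Firestore document is written.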
The complete Cloud Workflows code used is available HERE
Now I have to inform the requestor that something is available…
The requestor is listening on a Pub/Sub topic. That's good news, because Google just introduced the ability to automatically publish a message on a topic (without any code on your side) when a new version of a secret is published… See here.
This is what my requestors (whether they are users or software) will use to be informed when their renewal is ready to be consumed :)
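On the requestor side, a subscriber only has to filter for the new-version event and read the version name out of the payload. Here is a sketch of such a handler, assuming the `eventType` attribute and JSON payload shape of Secret Manager event notifications; the exact field names should be checked against the documentation:

```python
import base64
import json

def handle_secret_event(message: dict):
    """Requestor-side handler for a Pub/Sub push message generated by
    a Secret Manager event notification (field names are assumptions)."""
    attrs = message.get("attributes", {})
    # only react when a new secret version was added
    if attrs.get("eventType") != "SECRET_VERSION_ADD":
        return None
    payload = json.loads(base64.b64decode(message["data"]).decode())
    # the new version's full resource name, e.g. .../secrets/xyz/versions/4
    return payload.get("name")

msg = {
    "attributes": {"eventType": "SECRET_VERSION_ADD"},
    "data": base64.b64encode(json.dumps(
        {"name": "projects/p/secrets/xyz/versions/4"}).encode()),
}
version = handle_secret_event(msg)
```

Once the handler has the version name, the requestor simply fetches that version from Secret Manager to obtain the renewed key.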
Lastly, but also important, the cleaning part:
Every morning at 8 AM, Cloud Scheduler calls Cloud Workflows to:
- ask Firestore for the documents related to keys older than 90 days
- delete the related IAM key
- delete the related secret
- update the Firestore document
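The selection step of that morning job is just a date comparison. A sketch over the Firestore documents written by the factory (modelled here as plain dicts with ISO timestamps):

```python
from datetime import datetime, timedelta, timezone

def keys_to_clean(documents, now=None, max_age_days=90):
    """Return the monitoring documents whose key is older than
    `max_age_days`, i.e. the keys and secrets the job must delete."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [d for d in documents
            if datetime.fromisoformat(d["created_at"]) < cutoff]

docs = [
    {"key_id": "old", "created_at": "2021-01-01T00:00:00+00:00"},
    {"key_id": "new", "created_at": "2021-05-01T00:00:00+00:00"},
]
expired = keys_to_clean(docs, now=datetime(2021, 5, 11, tzinfo=timezone.utc))
```

In the real workflow this filter is a Firestore query on the `created_at` field, and each returned document drives the three delete/update steps above.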
I end up with something very light, which can be deployed wherever I want and costs almost nothing. As you probably know, the Cloud Workflows pricing model is very cheap (the first 5,000 steps are free) and a secret only costs $0.06 per version per location…
Finally, I have to inform people that from the moment the first key is created, it is their responsibility to come back and ask for a renewal within the next 90 days; otherwise the current key will automatically be deleted…