Setting up Config Connector with Terraform & Helm

Adrian Trzeciak 🇳🇴
Google Cloud - Community
5 min read · May 26, 2022

Infrastructure as code has come a long way since its early days. Multiple cloud providers and OSS projects have released their own tools to move from a point-and-click approach to a declarative way of setting up infrastructure and applications, with Terraform leading the race.

However, when you take security seriously, assigning least-privilege permissions and creating separate service accounts per workload, and want to keep control of service accounts and permissions centralized in a Git repo, you end up passing the account’s email as a Terraform output to the deployment script. That makes things unnecessarily complex, and you can easily end up cross-referencing resources across projects.

But what if you could create the necessary service account, assign its permissions, annotate it for seamless use with Workload Identity, and deploy it all as one Helm release? Well, say hello to Config Connector.

As Google states, Config Connector is a Kubernetes add-on that allows you to manage Google Cloud resources through Kubernetes. It can easily be enabled for any GKE cluster matching the minimum requirements. In this article we will be using a cluster-scoped Config Connector, which basically means that a single service account provisions all GCP resources across all namespaces. There is also a namespace-scoped mode that leverages project separation for better IAM and resource management (let me know if I should post a piece on that, I’ll be happy to do so).

In order to enable Config Connector for your cluster, you have to use the google-beta provider and declare an addons_config block with config_connector_config enabled.
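In rough terms, the relevant part of the cluster resource looks like this (a minimal sketch; the cluster name, location and node settings are placeholders, and Workload Identity is a prerequisite for the add-on):

resource "google_container_cluster" "primary" {
  provider = google-beta
  name     = "config-connector-demo"
  location = "europe-north1"

  # Config Connector requires Workload Identity on the cluster.
  workload_identity_config {
    workload_pool = "${var.project_id}.svc.id.goog"
  }

  # The add-on itself.
  addons_config {
    config_connector_config {
      enabled = true
    }
  }

  initial_node_count = 1
}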

We’re now ready to set up the service account that will do the provisioning of Config Connector-declared resources. The account needs IAM permissions within the project, as well as one specific IAM binding that allows Config Connector to impersonate the GSA.
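A sketch of what that can look like; roles/editor is used here only for brevity, so scope the project-level role down to what Config Connector actually has to manage:

# GSA that Config Connector will impersonate when provisioning resources.
resource "google_service_account" "config_connector" {
  account_id   = "config-connector"
  display_name = "Config Connector"
}

# Project-level permissions for the resources Config Connector should manage.
resource "google_project_iam_member" "config_connector" {
  project = var.project_id
  role    = "roles/editor"
  member  = "serviceAccount:${google_service_account.config_connector.email}"
}

# The binding that lets the cnrm-controller-manager KSA in the cnrm-system
# namespace impersonate the GSA through Workload Identity.
resource "google_service_account_iam_member" "config_connector_wi" {
  service_account_id = google_service_account.config_connector.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:${var.project_id}.svc.id.goog[cnrm-system/cnrm-controller-manager]"
}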

Now that we have a cluster and a service account, we need to tell the Config Connector operator inside our cluster to use that particular service account for provisioning. If you’re using the GitHub repo I’ve created, you’ll have to apply the infrastructure twice: first to create the cluster, then the remaining resources straight after. Here we apply a Kubernetes manifest by combining the yamldecode and templatefile functions, so the manifest stays plain YAML while the service account variable gets exchanged for the actual e-mail address of the service account.
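The manifest itself is the ConfigConnector custom resource; a sketch of the template (the file path is just an example):

# manifests/configconnector.yaml
apiVersion: core.cnrm.cloud.google.com/v1beta1
kind: ConfigConnector
metadata:
  # The name is fixed by the add-on and must be exactly this value.
  name: configconnector.core.cnrm.cloud.google.com
spec:
  mode: cluster
  googleServiceAccount: ${service_account}

...and the Terraform resource that renders and applies it:

resource "kubernetes_manifest" "config_connector" {
  manifest = yamldecode(templatefile("${path.module}/manifests/configconnector.yaml", {
    service_account = google_service_account.config_connector.email
  }))
}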

You can verify the installation by checking the installed CRDs with kubectl get crds and kubectl get configconnectors:

> k get configconnectors
NAME                                         AGE     HEALTHY
configconnector.core.cnrm.cloud.google.com   3m50s   true

You’ll also notice that the Config Connector operator is up and running. This is the workload that ensures all the resources Config Connector needs to run properly are present in the cluster. You can easily see what it installs by studying the logs of the configconnector-operator StatefulSet in the configconnector-operator-system namespace:

> kubectl logs configconnector-operator-0 -n configconnector-operator-system

The operator is responsible for creating the cnrm-system namespace, which is home to the cnrm-controller-manager that does the actual provisioning of GCP resources. If you take a closer look at the controller manager’s spec, you’ll notice that it runs as the cnrm-controller-manager service account, which is annotated with the GCP service account we created with Terraform. The following pods should be up and running inside the cnrm-system namespace:

> kubectl get po -n cnrm-system
NAME                                            READY   STATUS
cnrm-controller-manager-0                       2/2     Running
cnrm-deletiondefender-0                         1/1     Running
cnrm-resource-stats-recorder-6dfc78996c-szf25   2/2     Running
cnrm-webhook-manager-778cdd84cb-ncs5q           1/1     Running
cnrm-webhook-manager-778cdd84cb-x4xpk           1/1     Running
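If you want to confirm the Workload Identity wiring yourself, inspect the annotations on that service account; the iam.gke.io/gcp-service-account annotation should point at the GSA from the Terraform code:

> kubectl get serviceaccount cnrm-controller-manager -n cnrm-system -o yaml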

Since our Config Connector is up and running, we can deploy our fantastic application that fetches the contents of a file from a given bucket. To keep it simple, it listens on the “/” endpoint and reads the content of a file based on the BUCKET_NAME and FILE_NAME environment variables set on the deployment. We will pass those from the values.yaml file of the Helm chart.
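A sketch of how that wiring can look (the value keys and file name are illustrative; the actual chart lives in the repo linked at the end):

# values.yaml
projectId: <YOUR_PROJECT_ID>
bucketName: config-connector-<YOUR_RANDOM_STRING>
fileName: file.txt

# templates/deployment.yaml (excerpt)
          env:
            - name: BUCKET_NAME
              value: {{ .Values.bucketName | quote }}
            - name: FILE_NAME
              value: {{ .Values.fileName | quote }}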

We will also ask Helm to create and annotate the service account, and to expose the service as a LoadBalancer so we can reach it from outside the cluster.
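Roughly, the relevant templates (again a sketch; the annotation is what ties the KSA to the GSA for Workload Identity, and the port numbers are placeholders):

# templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: file-reader
  annotations:
    iam.gke.io/gcp-service-account: file-reader@{{ .Values.projectId }}.iam.gserviceaccount.com
---
# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: file-reader
spec:
  type: LoadBalancer
  selector:
    app: file-reader
  ports:
    - port: 80
      targetPort: 8080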

The chart will also provision 3 Google Cloud Platform resources (sketched after the list below):

  • GCP Service Account for utilization of Workload Identity
  • IAM Policy granting our Kubernetes service account permission to act as user of our GCP service account — roles/iam.workloadIdentityUser
  • IAM Policy granting our GCP service account permission to read objects inside our previously created bucket — roles/storage.objectViewer
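Expressed as Config Connector manifests in the chart’s templates, those three resources could look roughly like this (a sketch; the resource names mirror the kubectl output further down, and in the real chart they are derived from the release name):

# templates/gcp.yaml
# With a cluster-scoped Config Connector, the namespace or the resource itself
# needs the cnrm.cloud.google.com/project-id annotation so the controller knows
# which project to create the resource in.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  name: file-reader
  annotations:
    cnrm.cloud.google.com/project-id: {{ .Values.projectId }}
---
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: file-reader-app-chart-wi
spec:
  role: roles/iam.workloadIdentityUser
  member: serviceAccount:{{ .Values.projectId }}.svc.id.goog[file-reader/file-reader]
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: file-reader
---
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: file-reader-app-chart-bucket
spec:
  role: roles/storage.objectViewer
  member: serviceAccount:file-reader@{{ .Values.projectId }}.iam.gserviceaccount.com
  resourceRef:
    apiVersion: storage.cnrm.cloud.google.com/v1beta1
    kind: StorageBucket
    external: {{ .Values.bucketName }}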

The chart will be installed using the Helm provider, with project_id and bucket_name passed as template variables.
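A sketch of the Terraform side (the chart path, values file and bucket resource address are assumptions; the values file is a small template where the two variables are substituted in):

resource "helm_release" "file_reader" {
  name             = "file-reader"
  chart            = "${path.module}/charts/app-chart"
  namespace        = "file-reader"
  create_namespace = true

  # values/file-reader.yaml is a template containing e.g.
  #   projectId: ${project_id}
  #   bucketName: ${bucket_name}
  values = [
    templatefile("${path.module}/values/file-reader.yaml", {
      project_id  = var.project_id
      bucket_name = google_storage_bucket.bucket.name
    })
  ]
}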

When the installation has finished, you should see a file-reader namespace inside your cluster containing a replica set, a deployment, a service account and a service exposing an external public IP address. In addition to that, the chart will deploy our GCP resources. If you’ve set everything up properly, those will end up in a ready state, which means the controller manager was able to provision the GCP resources successfully:

❯ k get IAMServiceAccounts,IAMPolicyMembers
NAME
iamserviceaccount.iam.cnrm.cloud.google.com/file-reader
NAME
iampolicymember.iam.cnrm.cloud.google.com/file-reader-app-chart-bucket
iampolicymember.iam.cnrm.cloud.google.com/file-reader-app-chart-wi

We can now test our application in order to verify that we have access to the bucket:

> curl <YOUR_EXTERNAL_IP>
"Two peanuts were walking down the street. One was a salted\n"

When navigating the console, you should notice a new service account, file-reader@<YOUR_PROJECT_ID>.iam.gserviceaccount.com, that has no project-wide permissions, is allowed to access objects in the config-connector-<YOUR_RANDOM_STRING> bucket, and has <YOUR_PROJECT_ID>.svc.id.goog[file-reader/file-reader] as its Workload Identity user.

I believe Config Connector is an amazing and powerful bridge between common infrastructure declarations and application-specific components. I will definitely give it a try in production in the near future.

Make sure to check out the GitHub repo containing all the source code:

Interested in knowing more about how we at Strise.ai are having fun with Google Cloud Platform and what our environment looks like? Stop by for a coffee or ping us on LinkedIn!

And yes — we’re hiring.
