Google Config Connector: Deploying Spanner from Kubernetes
Google recently released the Google Config Connector, which lets you deploy managed GCP services like Spanner as if they were native Kubernetes resources. The Config Connector accomplishes this by defining each managed service as a Custom Resource Definition (CRD) and using a Kubernetes Operator to manage the lifecycle of those resources.
The Config Connector is an exciting development because, from my perspective, it signals two things:
- Operator frameworks have continued to mature and are now starting to see real production releases.
- Google is making a bet that even the management of high-level services will run through Kubernetes.
I spent some time recently downloading and installing the Config Connector, and it's a very compelling addition to the Kubernetes landscape. I am not the biggest fan of YAML, but I have to admit that being able to define these resources in a few lines is helpful. The Config Connector allowed me to quickly spin up a CloudSQL instance in an automated fashion, which I could then easily integrate into a CD pipeline.
Contrast this with existing IaC tools like Terraform and CloudFormation: if you're only going to be using Kubernetes, the Config Connector is much faster to get started with. The other nice aspect is context-aware dependency management, because you're using a native tool that understands the ecosystem. There are built-in references, which I discuss below, that halt the creation of CloudSQL users until the underlying CloudSQL instance is healthy. Dependency management like this saves you from endless retries and timeouts while waiting for a service to become healthy.
As mentioned above, I'm also incredibly excited to see the Operator frameworks continue to mature. As more vendors release Operators, they're able to instill organization-level knowledge into customer-facing products, which will hopefully reduce the complexity of managing and maintaining databases, queues, and more.
Kubernetes Operators are not magic, of course; a lot of work is necessary to make them reliable. However, with the right approach, I think they can significantly reduce the operational burden on teams and allow them to use best-of-breed tools while staying focused on writing their applications.
Back to the Config Connector. All of my steps are documented below if you wish to follow along or recreate my process. As I said above, it's quite amazing to be able to quickly deploy a CloudSQL or Spanner instance using a few lines of YAML. I did run into a single bug where my CloudSQL resource became divorced from the underlying API: the Config Connector expected the CloudSQL instance to exist during the delete stage, but it no longer did, which forced me to recycle my GKE cluster. Other than that, my time working with the Config Connector has been flawless.
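In hindsight, recycling the cluster may have been avoidable. A common Kubernetes escape hatch for a resource whose backing API object has already disappeared is to clear its finalizers so the delete can complete. This is a hedged sketch of that approach, not an officially documented recovery procedure, and the resource name here is the one from my example below:

```shell
# CAUTION: clearing finalizers skips the controller's cleanup logic; only do
# this when the underlying GCP resource is already gone. Illustrative sketch.
kubectl patch sqlinstance config-connector-cloudsql-instance \
  --namespace "$PROJECT_ID" \
  --type merge \
  --patch '{"metadata":{"finalizers":[]}}'
```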
Installation/Configuration
Set Up the GCP Environment
1. Create a GCP Project
gcloud projects create config-connector-test
2. Get your GCP Project ID
gcloud projects list --filter=name=config-connector-test
3. Save your Project ID to an ENV Variable
export PROJECT_ID=[PROJECT_ID]
4. Set your Active GCP Project
gcloud config set project $PROJECT_ID
5. Enable Kubernetes Engine API
gcloud services enable container.googleapis.com
6. Create your GKE Cluster
gcloud container clusters create config-connector-cluster --zone=us-central1-a
7. Save your Kubernetes Credentials
gcloud container clusters get-credentials config-connector-cluster --zone=us-central1-a
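Before installing anything, it's worth confirming that kubectl is now pointed at the new cluster:

```shell
# The current context should reference the new cluster, and its nodes
# should report a Ready status.
kubectl config current-context
kubectl get nodes
```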
Install Config Connector
1. Create a Config Connector Service Account
gcloud iam service-accounts create cnrm-system
2. Generate Service Account Key
gcloud iam service-accounts keys create key.json --iam-account cnrm-system@$PROJECT_ID.iam.gserviceaccount.com
3. Create Config Connector Kubernetes Namespace
kubectl create namespace cnrm-system
4. Create GKE Secret for the Config Connector’s Service Account
kubectl create secret generic gcp-key --from-file key.json --namespace cnrm-system
5. Download and Untar the Config Connector Bundle
gsutil cp gs://cnrm/latest/release-bundle.tar.gz release-bundle.tar.gz
tar zxvf release-bundle.tar.gz
6. Deploy the Config Connector Manifests
kubectl apply -f install-bundle-gcp-identity/
Quick Note: You’ll notice that this deploys several Custom Resource Definitions. This list represents all the available resources that the Config Connector can currently manage. The title image includes a visual representation, and this documentation link lists each resource definition.
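You can also list the installed definitions yourself; every Config Connector CRD lives under the cnrm.cloud.google.com API group:

```shell
# List only the CRDs that the Config Connector bundle installed.
kubectl get crds | grep cnrm.cloud.google.com
```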
7. Create a Kubernetes Namespace for Project
kubectl create namespace $PROJECT_ID
Quick Note: This is important, and we will see a more complex example later: Kubernetes namespaces are how the Config Connector maps external resources to GCP Projects.
Examples
PubSub
1. Enable the PubSub API
gcloud services enable pubsub.googleapis.com
2. Grant the Service Account PubSub Editor Permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:cnrm-system@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/pubsub.editor
3. Create PubSub Manifest
# pubsub.yaml
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: config-connector-pubsub-topic
4. Apply Manifest and Create PubSub Topic
kubectl apply -f pubsub.yaml --namespace=$PROJECT_ID
5. Verify the PubSub Topic has been Created
gcloud pubsub topics list
---
labels:
  managed-by-cnrm: 'true'
name: projects/config-connector-test-268019/topics/config-connector-pubsub-topic
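Because the topic is also a Kubernetes object, you can inspect it with standard tooling as well:

```shell
# The topic lives in the project namespace like any other namespaced resource.
kubectl get pubsubtopics --namespace "$PROJECT_ID"
kubectl describe pubsubtopic config-connector-pubsub-topic --namespace "$PROJECT_ID"
```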
CloudSQL
1. Enable the CloudSQL API
gcloud services enable sqladmin.googleapis.com
2. Grant the Service Account CloudSQL Admin Permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:cnrm-system@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/cloudsql.admin
3. Create CloudSQL Manifest
# cloudsql.yaml
---
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: config-connector-cloudsql-instance
spec:
  region: us-central1
  databaseVersion: POSTGRES_9_6
  settings:
    tier: db-n1-standard-1
---
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: config-connector-cloudsql-user
spec:
  instanceRef:
    name: config-connector-cloudsql-instance
  host: "%"
  password:
    value: "Password1234"
4. Apply Manifest and Create CloudSQL Instance and User
kubectl apply -f cloudsql.yaml --namespace=$PROJECT_ID
5. Verify the CloudSQL Instance has been Created
gcloud sql instances list
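CloudSQL instances take several minutes to provision. Config Connector resources report a Ready condition on the Kubernetes object, so in a CD pipeline you could block on it rather than polling gcloud (the timeout value here is arbitrary):

```shell
# Block until the instance reports Ready, or fail after 20 minutes.
kubectl wait sqlinstance/config-connector-cloudsql-instance \
  --namespace "$PROJECT_ID" \
  --for=condition=Ready \
  --timeout=20m
```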
RBAC
Kubernetes has internal concepts of Service Accounts, Roles, and RoleBindings, which let you define sets of permissions on Kubernetes resources. Because the Config Connector's resources are CRDs, you can restrict the ability to create external GCP resources through this same Service Account machinery. Google's documentation on this is quite good, but I did want to make a note of it and include a link below.
Google Config Connector RBAC Documentation
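As a hedged sketch of what this looks like, the manifest below defines a Role that would let a CI service account manage only SQLInstance objects in the project namespace, and nothing else. The namespace and service account names are illustrative, not from Google's docs:

```yaml
# Illustrative only: a Role scoped to the Config Connector's SQLInstance CRD.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sqlinstance-editor
  namespace: config-connector-test    # your $PROJECT_ID namespace
rules:
  - apiGroups: ["sql.cnrm.cloud.google.com"]
    resources: ["sqlinstances"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sqlinstance-editor-binding
  namespace: config-connector-test
subjects:
  - kind: ServiceAccount
    name: deploy-bot                  # hypothetical CI service account
    namespace: config-connector-test
roleRef:
  kind: Role
  name: sqlinstance-editor
  apiGroup: rbac.authorization.k8s.io
```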
Folder/Organization Hierarchy
GCP has an internal hierarchy of Organizations, Folders, and Projects. All of our examples so far have involved a GKE cluster managing resources inside its own GCP Project. However, I did want to include a few snippets that showcase the Config Connector creating resources in other Projects, Folders, and even Organizations.
- The policy binding below would allow the Config Connector service account to create any resource in the specified Project.
gcloud projects add-iam-policy-binding [PROJECT_ID] \
--member serviceAccount:cnrm-system@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/owner
- The policy binding below would allow the Config Connector service account to create any resource in the specified Folder.
gcloud resource-manager folders add-iam-policy-binding [FOLDER_ID] \
--member serviceAccount:cnrm-system@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/owner
- The policy binding below would allow the Config Connector service account to create any resource in the specified Organization.
gcloud organizations add-iam-policy-binding [ORGANIZATION_ID] \
--member serviceAccount:cnrm-system@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/owner
Quick Note: All of these policy bindings grant the Config Connector service account the owner role. In production, this would not be the right choice; ensure that you are using roles with the least privilege necessary.
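On the Kubernetes side, the mapping again runs through namespaces. As I understand it, the Config Connector supports namespace annotations such as cnrm.cloud.google.com/project-id and cnrm.cloud.google.com/folder-id to point a namespace at a different Project or Folder; treat the exact annotation keys as an assumption and verify them against the docs:

```shell
# Assumed annotation key -- verify against the Config Connector documentation.
# Resources created in this namespace would land in the other project.
kubectl create namespace team-a
kubectl annotate namespace team-a \
  cnrm.cloud.google.com/project-id="[OTHER_PROJECT_ID]"
```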
References/Dependencies
Dependencies are an exciting and valuable component of the Config Connector: they let you specify implicit resource dependencies. Using our CloudSQL example from above, we can tell the Config Connector not to create the CloudSQL user until the underlying CloudSQL instance has been created and is healthy.
---
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: config-connector-cloudsql-user
spec:
  instanceRef:
    name: config-connector-cloudsql-instance
  host: "%"
  password:
    value: "Password1234"
Similarly, the PubSub Subscription CRD has a topicRef for the same dependency management.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubSubscription
metadata:
  name: config-connector-pubsub-subscription
spec:
  topicRef:
    name: config-connector-pubsub-topic
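You can watch this behavior in action: if you apply the subscription before the topic exists, the subscription object simply sits un-Ready, and its status and events explain why. The exact event wording varies by version, so treat the message below as an expectation rather than a guarantee:

```shell
# While the referenced topic is missing, the subscription stays un-Ready;
# its events should describe the unmet dependency (a DependencyNotReady-style
# message) rather than erroring out.
kubectl describe pubsubsubscription config-connector-pubsub-subscription \
  --namespace "$PROJECT_ID"
```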
Conclusion
Google's Config Connector is a compelling project that focuses on exposing non-native resources within a Kubernetes cluster. I find this a much more interesting approach than trying to run everything natively in Kubernetes, and it is fascinating to watch Kubernetes Operators continue to rise in popularity. Personally, I much prefer Operators to Helm charts, so I am excited to see functionality like this become more common.
Follow me on Twitter!
Sources
- Resource Definitions: https://cloud.google.com/config-connector/docs/reference/resources
- Config Connector Overview: https://cloud.google.com/config-connector
- Securing Access to Resources: https://cloud.google.com/config-connector/docs/how-to/securing-access-to-resources