Infrastructure as Code using Kubernetes

Esteban Aramendi
Globant
Feb 14, 2023

Config Connector (KCC) is a solution for managing cloud resources as Infrastructure as Code. It is built as an open-source initiative and runs on Kubernetes clusters; as such, it leverages YAML files to maintain and operate those resources.

Config Connector has two versions: an Add-On for Google Kubernetes Engine (GKE) clusters and a manual installation for other Kubernetes distributions.

The purpose of this article is to install Config Connector as an Add-On, then create and import resources so you can manage your infrastructure from one place.

This article will be helpful to Cloud and DevOps engineers at any level. Some knowledge of Google Cloud and the "Infrastructure as Code" approach is assumed.

Prerequisites

For you to follow these steps successfully, you will need the following:

  • A Google Cloud Platform (GCP) project in which we will install Config Connector and deploy your resources.
  • The Application Programming Interfaces (APIs) for Compute, Network, GKE, and Cloud Resource Manager enabled, so the resources can be created.
  • A Virtual Private Cloud (VPC) Network, used as a resource creation example.
  • A Google Kubernetes Engine (GKE) Cluster where Config Connector will run.

Note: KCC needs the Cloud Resource Manager API enabled to work properly; therefore, you will need to enable it before starting to deploy resources. You can use the command below to do so.

gcloud services enable cloudresourcemanager.googleapis.com
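The remaining prerequisite APIs can be enabled the same way, and gcloud services enable accepts several services in one call. As a sketch (these are the standard service names for Compute Engine and GKE, which also cover VPC networking):

```shell
# Enable all prerequisite APIs in a single command
gcloud services enable \
    compute.googleapis.com \
    container.googleapis.com \
    cloudresourcemanager.googleapis.com
```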


You could also use the following gcloud commands:

# Set default project
gcloud config set project iac-configconnector

# Set default compute zone
gcloud config set compute/zone us-east1-b

# Set default compute region
gcloud config set compute/region us-east1

# Create a GKE cluster
gcloud container clusters create configconnectordeplyments \
--zone us-east1-b

Note: If you are testing it, you can add the --spot flag to the last command to reduce costs.

The expected output of the last command should be similar to the following:

NAME: configconnectordeplyments
LOCATION: us-east1-b
MASTER_VERSION: 1.24.8-gke.2000
MASTER_IP: 34.139.195.104
MACHINE_TYPE: e2-medium
NODE_VERSION: 1.24.8-gke.2000
NUM_NODES: 3
STATUS: RUNNING

Installation steps

We need to create a Google service account that will be used as the deployer of all the resources in the GCP project, folder, or organization.

gcloud iam service-accounts create conf-connect

Once created, we will grant it the Owner role at the corresponding scope.

gcloud projects add-iam-policy-binding iac-configconnector \
--member serviceAccount:conf-connect@iac-configconnector.iam.gserviceaccount.com \
--role roles/owner

Note: Assigning the Owner role (or other broad predefined roles) is not a best practice: it will almost certainly grant more access than needed. We do it here only for testing purposes. For production or more formal environments, please create a custom role containing only the permissions required by the resources you will create through Config Connector.
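As a rough sketch of that recommendation, a custom role limited to the network resources used later in this article could look like the following. The role name and permission list are illustrative, not exhaustive:

```shell
# Hypothetical least-privilege role for the VPC/subnetwork example
gcloud iam roles create kccNetworkDeployer \
    --project iac-configconnector \
    --title "KCC Network Deployer" \
    --permissions compute.networks.create,compute.networks.get,compute.networks.delete,compute.subnetworks.create,compute.subnetworks.get,compute.subnetworks.delete

# Bind the custom role instead of roles/owner
gcloud projects add-iam-policy-binding iac-configconnector \
    --member serviceAccount:conf-connect@iac-configconnector.iam.gserviceaccount.com \
    --role projects/iac-configconnector/roles/kccNetworkDeployer
```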

We will create two namespaces: one for the KCC workloads and another one for resource references (configconnect-ns as an example).

# Namespace for the KCC workloads
kubectl create namespace cnrm-system

# Namespace for resource references (example)
kubectl create namespace configconnect-ns

Create a key for the previously created Google Service Account and store it in a key.json file.

gcloud iam service-accounts keys create key.json --iam-account \
conf-connect@iac-configconnector.iam.gserviceaccount.com

Note: Again, this should be avoided in formal or production environments. For such scenarios, please refer to the official documentation for a better approach, such as using Workload Identity.
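The install bundle used below authenticates with this key through a Kubernetes Secret in the cnrm-system namespace. Per the Config Connector manual-installation docs, the Secret is named gcp-key:

```shell
# Make the service account key available to the KCC controllers
kubectl create secret generic gcp-key \
    --from-file key.json \
    --namespace cnrm-system
```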

Download the config connector bundle package:

gsutil cp gs://cnrm/latest/release-bundle.tar.gz release-bundle.tar.gz

Untar the file.

tar -xvf release-bundle.tar.gz

Install config connector bundle:

kubectl apply -f install-bundle-gcp-identity/

Install the Config Connector CLI toolkit (the config-connector binary) to import and export resources; we will use it later in this article.

Now you need to set the scope of Config Connector. Choose the one of the following three options that applies to your use case. For our example, we will use the Project scope:

# Folder:
kubectl annotate namespace \
configconnect-ns cnrm.cloud.google.com/folder-id=[FOLDER_ID]

# Project (We will use this one for the demo):
kubectl annotate namespace \
configconnect-ns cnrm.cloud.google.com/project-id=iac-configconnector

# Organization:
kubectl annotate namespace \
configconnect-ns cnrm.cloud.google.com/organization-id=[ORGANIZATION_ID]

Configure the delegated namespace as the default namespace: this avoids the use of the --namespace parameter in every kubectl command (configconnect-ns for this example):

kubectl config set-context --current --namespace configconnect-ns

Connection to the KCC Cluster

To connect to an existing KCC cluster, you will need to add this cluster to your local kubeconfig file. Please check the process in this document.
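In short, fetching cluster credentials is typically done with gcloud; a sketch using this article's example cluster name and zone:

```shell
# Add the cluster to your local kubeconfig
gcloud container clusters get-credentials configconnectordeplyments \
    --zone us-east1-b
```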

You can list all the resources available to be deployed/managed by KCC using this command:

kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true

Or by checking the following documentation.

Using this URL, you can check the YAML structure and parameters needed for every service deployed through KCC.

You can also check this using the following command (compute subnetwork as an example):

kubectl describe crd computesubnetworks.compute.cnrm.cloud.google.com

You can get the name of the resource to describe from the output of the previous command.

kubectl get crds --selector cnrm.cloud.google.com/managed-by-kcc=true

After you decide which resource(s) to deploy, you need to enable the API related to each resource if it is disabled. This can be done with gcloud commands, in the Console, or using a YAML manifest through KCC.

The YAML template to enable an API is:

apiVersion: serviceusage.cnrm.cloud.google.com/v1beta1
kind: Service
metadata:
  name: networkmanagement.googleapis.com # this service as an example

Following the networking example, a YAML structure for creating a VPC and a subnetwork (a single YAML file can deploy more than one resource) looks like this:

apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeNetwork
metadata:
  name: kccproject-op
spec:
  routingMode: REGIONAL
  autoCreateSubnetworks: false
---
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeSubnetwork
metadata:
  labels:
    label-one: "cloudbuild"
  name: subnetwork-vm
spec:
  ipCidrRange: 10.100.10.0/24
  region: us-east1
  description: IaC subnet for Instances
  privateIpGoogleAccess: false
  networkRef:
    name: kccproject-op
  logConfig:
    aggregationInterval: INTERVAL_15_MIN
    flowSampling: 0.5
    metadata: INCLUDE_ALL_METADATA

The command to deploy the GCP resource is the following:

kubectl apply -f create_network.yaml

Note: create_network.yaml is the file containing the YAML code that creates your resources. You can name this file as you see fit.

To check a deployment's status or troubleshoot errors, you can review the logs of the controller manager workload:

kubectl logs cnrm-controller-manager-0 --namespace=cnrm-system

The expected output should be similar to the following:

my-user@cloudshell:~/.kcc (my-project)$ kubectl logs cnrm-controller-manager-0 --namespace=cnrm-system | tail -10
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"reference ComputeNetwork configconnect-ns/kccproject-op is not ready","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"starting reconcile","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"reference ComputeNetwork configconnect-ns/kccproject-op is not ready","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"successfully finished reconcile","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"starting reconcile","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"creating/updating underlying resource","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"successfully finished reconcile","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"starting reconcile","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"underlying resource up to date","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
{"level":"info","ts":[REDACTED],"logger":"computesubnetwork-controller","msg":"successfully finished reconcile","resource":{"namespace":"configconnect-ns","name":"subnetwork-vm"}}
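Besides the controller logs, KCC sets a Ready condition on every resource object, so you can check (or wait on) status directly; a sketch using the names from the example above:

```shell
# Show the readiness of the resources created earlier
kubectl get computenetwork kccproject-op
kubectl get computesubnetwork subnetwork-vm

# Or block until the network becomes ready
kubectl wait --for=condition=Ready computenetwork/kccproject-op --timeout=5m
```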

And then you can also check in the Console:

  • Click on the "Navigation Menu".
  • Select "VPC Network".
  • You will now see a new VPC network called "kccproject-op" with an associated subnetwork called "subnetwork-vm", both created by KCC.
Resource creation example.
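If you prefer the CLI over the Console, the same check can be sketched with gcloud:

```shell
# List the VPC and the subnetwork created through KCC
gcloud compute networks list --filter "name=kccproject-op"
gcloud compute networks subnets list --filter "network:kccproject-op"
```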

If you need to delete a resource, specify the same YAML file used during its creation:

kubectl delete -f <file_name>.yaml
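Note that kubectl delete will, by default, also delete the underlying GCP resource. If you only want to stop managing a resource with KCC while keeping it in GCP, Config Connector supports an "abandon" deletion policy via an annotation; a sketch applied to the network from the example:

```yaml
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeNetwork
metadata:
  name: kccproject-op
  annotations:
    # "abandon" keeps the GCP resource when the K8s object is deleted
    cnrm.cloud.google.com/deletion-policy: abandon
spec:
  routingMode: REGIONAL
  autoCreateSubnetworks: false
```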

Listing objects managed by KCC

In the GKE cluster section, you can go to the Object Browser, where you will see the objects managed by KCC along with their properties; you can even download a YAML file with the configuration of each resource.

And then you can also check in the Console:

  • Click on the "Navigation Menu".
  • Select "Kubernetes Engine".
  • Select "Object Browser".
  • And there you will see the objects that KCC has created.
Listing resources example.

Importing existing resources into KCC

For this scenario, we will create a bucket from the GCP Console. After its creation, we will execute the following command, which exports all bucket resources into a JSON file.

bucket_name=my_bucket_$RANDOM
gcloud asset export --content-type resource \
    --project kubeconfig-test-poc \
    --asset-types storage.googleapis.com/Bucket \
    --output-path "gs://$bucket_name/my_objects.json"

We will download the file exported from the previous command.

gsutil cp gs://$bucket_name/my_objects.json .

And now, using the config-connector toolkit, we will convert it into KCC manifests.

config-connector bulk-export -i ./my_objects.json

This produces the YAML manifests you can import into KCC:

apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  annotations:
    cnrm.cloud.google.com/force-destroy: "false"
    cnrm.cloud.google.com/project-id: REDACTED
  name: kubeconfig-test-poc-bucket-outside-config-123sad
spec:
  location: US-EAST1
  publicAccessPrevention: enforced
  resourceID: kubeconfig-test-poc-bucket-outside-config-123sad
  storageClass: STANDARD
  uniformBucketLevelAccess: true
---
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicy
metadata:
  name: kubeconfig-test-poc-bucket-outside-config-123sad-iampolicy
spec:
  bindings:
  - members:
    - projectEditor:REDACTED
    - projectOwner:REDACTED
    role: roles/storage.legacyBucketOwner
  - members:
    - projectViewer:REDACTED
    role: roles/storage.legacyBucketReader
  - members:
    - projectEditor:REDACTED
    - projectOwner:REDACTED
    role: roles/storage.legacyObjectOwner
  - members:
    - projectViewer:REDACTED
    role: roles/storage.legacyObjectReader
  resourceRef:
    apiVersion: storage.cnrm.cloud.google.com/v1beta1
    external: kubeconfig-test-poc-bucket-outside-config-123sad
    kind: StorageBucket
---
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  annotations:
    cnrm.cloud.google.com/force-destroy: "false"
    cnrm.cloud.google.com/project-id: REDACTED
  labels:
    managed-by-cnrm: "true"
  name: kubeconfig-test-poc-bucket-df82763sa
spec:
  lifecycleRule:
  - action:
      storageClass: NEARLINE
      type: SetStorageClass
    condition:
      age: 30
      withState: ANY
  location: US
  publicAccessPrevention: inherited
  resourceID: kubeconfig-test-poc-bucket-df82763sa
  storageClass: STANDARD
  uniformBucketLevelAccess: true
  versioning:
    enabled: false
---
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicy
metadata:
  name: kubeconfig-test-poc-bucket-df82763sa-iampolicy
spec:
  bindings:
  - members:
    - projectEditor:REDACTED
    - projectOwner:REDACTED
    - user:REDACTED
    role: roles/storage.legacyBucketOwner
  - members:
    - projectViewer:kubeconfig-test-poc
    role: roles/storage.legacyBucketReader
  - members:
    - projectEditor:REDACTED
    - projectOwner:REDACTED
    role: roles/storage.legacyObjectOwner
  - members:
    - projectViewer:REDACTED
    role: roles/storage.legacyObjectReader
  resourceRef:
    apiVersion: storage.cnrm.cloud.google.com/v1beta1
    external: kubeconfig-test-poc-bucket-df82763sa
    kind: StorageBucket

You can save this output in a YAML file and apply it using the command:

kubectl apply -f bucket-outside.yaml
storagebucket.storage.cnrm.cloud.google.com/config-export-buckeet created
iampolicy.iam.cnrm.cloud.google.com/config-export-buckeet-iampolicy created
storagebucket.storage.cnrm.cloud.google.com/dato8984akd created
iampolicy.iam.cnrm.cloud.google.com/dafo8984skd-iampolicy created

Instead of importing one bucket at a time, you can import all the existing buckets using their resource type.

Using gcloud asset export, let's export all assets of the bucket type.

gcloud asset export --content-type resource \
    --project <project-id> \
    --asset-types storage.googleapis.com/Bucket \
    --output-path "gs://$bucket_name/my_objects.yaml" --format=yaml

Copy it to your current working directory:

gsutil cp gs://$bucket_name/my_objects.yaml .

Rename the file from .yaml to .json (mv changes only the extension, not the content):

mv my_objects.yaml my_objects.json

And by using both tools, config-connector and kubectl, import it into your configuration:

config-connector bulk-export -i ./my_objects.json | kubectl apply -f -

[redacted]@cloudshell:~/linux/amd64 [redacted]$ ./config-connector bulk-export -i ./my_file.json | kubectl apply -f -
storagebucket.storage.cnrm.cloud.google.com/config-export-buckeet created
iampolicy.iam.cnrm.cloud.google.com/config-export-buckeet-iampolicy created
storagebucket.storage.cnrm.cloud.google.com/dato8984akd created
iampolicy.iam.cnrm.cloud.google.com/dafo8984skd-iampolicy created
Config Connector sample diagram/workflow.

Conclusion

By following this post, you can install Config Connector in your GKE cluster, then manage resources and import existing ones into KCC using YAML files and the Kubernetes command line.

Further Reading

Google Cloud Config Connector Overview.
How to import and export resources documentation.
Bulk import and export resources documentation.
