Landing Zone in GCP Using Config Controller and Config Connector

Google Cloud Landing Zones

Kiran K
Google Cloud - Community
Oct 30, 2023


What is a Landing Zone?


In a fast-moving world, organizations want to launch and deploy workloads quickly, with confidence in their infrastructure environment. This is where a landing zone, also known as a cloud foundation build, helps. A landing zone is a modular configuration that allows organizations to adopt the cloud for their business needs. It consists of multiple components, including projects, networks, security, and resource management. Landing zones can be customized to an organization's needs and to the security best practices it wants to apply when onboarding teams or projects.

Adopt Landing Zone

Organizations have started adopting the landing zone approach to onboard to the cloud. Consider using an Infrastructure as Code (IaC) approach, which makes your deployments modular and repeatable. One such IaC approach uses the KRM (Kubernetes Resource Model). Google Cloud offers help with setting up landing zones designed to meet the needs of different industries and business sizes across the globe: https://cloud.google.com/architecture/landing-zones#help_with_landing_zone_setup

Config Connector resources in GCP

Config Connector provides a collection of Kubernetes Custom Resource Definitions (CRDs) and controllers. The Config Connector CRDs allow Kubernetes to create and manage Google Cloud resources when you configure and apply objects to your cluster. An example Config Connector resource for creating a project in GCP is shown below.

apiVersion: resourcemanager.cnrm.cloud.google.com/v1beta1
kind: Project
metadata:
  name: project-id # kpt-set: ${project-id}
  namespace: projects # kpt-set: ${projects-namespace}
  annotations:
    cnrm.cloud.google.com/auto-create-network: "false"
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:project/v0.4.4
spec:
  name: project-id # kpt-set: ${project-id}
  billingAccountRef:
    external: "AAAAAA-BBBBBB-CCCCCC" # kpt-set: ${billing-account-id}
  folderRef:
    name: name.of.folder # kpt-set: ${folder-name}
    namespace: hierarchy # kpt-set: ${folder-namespace}

Starting with Landing Zone using Config Connector

To begin, we must set up the Config Controller cluster and the permissions required to manage resources through it. Consider using a bootstrap project to set up Config Controller with the relevant permissions to manage organization and project resources.

Refer to https://medium.com/google-cloud/infrastructure-as-code-using-config-controller-in-gcp-e7ad467d4227 for the Config Controller setup.

GitOps Approach

As companies expand the number of deployments and production clusters they use, creating and enforcing consistent configurations and security policies across a growing environment becomes difficult. At that point, the choice of management surface is no longer driven by preference, but by capabilities. To address this challenge, it is increasingly common for platform administrators to use “GitOps” methodology to deploy configuration consistently across clusters and environments with a version-controlled deployment process. Using the same principles as Kubernetes itself, GitOps reconciles the desired state of clusters with a set of Kubernetes declarative configuration files in a source control system, namely git.

Kpt Package Manager

Kpt supports management of Configuration as Data, which is built on the foundation of the KRM (Kubernetes Resource Model). It makes configuration data (packages) the source of truth, stored separately from the live state. kpt has a catalog of functions, including GCP-specific ones, that enable and simplify configuration management. Refer to https://catalog.kpt.dev/
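The `# kpt-set: ${...}` comments that appear throughout the configurations in this article are setter markers consumed by kpt's apply-setters function. As a minimal illustration (the project name here is a placeholder assumption), rendering the package replaces the field value with the setter value while preserving the comment:

```yaml
# Before rendering: placeholder value, setter named in the trailing comment
metadata:
  name: project-id # kpt-set: ${project-id}
---
# After `kpt fn render` with the setter project-id=my-prod-project applied
metadata:
  name: my-prod-project # kpt-set: ${project-id}
```

Because the marker comment survives rendering, the same package can be re-rendered later with different setter values.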

Setup GitOps

In this section, we will set up GitOps using Cloud Source Repositories and Cloud Build within GCP. To begin, we must complete the Config Controller setup mentioned above. Once done, follow the steps below.

  1. Create a folder called “landing-zone” in your Cloud Shell Editor and navigate to the directory.
  2. Clone the package: kpt pkg get https://github.com/GoogleCloudPlatform/blueprints.git/catalog/gitops@main
  3. Move into the local package: cd ./gitops/
  4. Edit the setters.yaml file. Your setters file should look similar to the one below; configure the required project name, cluster name, source repo, deployment repo, and project number.
apiVersion: v1
kind: ConfigMap
metadata:
  name: setters
  annotations:
    config.kubernetes.io/local-config: "true"
data:
  # This should be the project where you deployed Config Controller
  project-id: project-id
  project-number: "1234567890123"
  # This should be the name of your Config Controller instance
  cluster-name: cluster-name
  # You can leave these defaults
  namespace: config-control
  deployment-repo: deployment-repo
  source-repo: source-repo

5. You can apply the catalog using either kpt or kubectl. If you are using kpt, run the commands below to render the configurations and apply them:

a. kpt fn render

b. kpt live init --namespace ${NAMESPACE} (in this case, config-control is the namespace)

c. kpt live apply

d. kpt live status --output table --poll-until current

6. If you are using kubectl, navigate to the parent directory of the gitops folder and run kubectl apply -f gitops.

Understanding the GitOps Components


The configsync folder contains the Config Connector resources to set up Config Sync in the Config Controller cluster. The rootsync.yaml file configures the "main" branch of the deployment repo as the single source of truth for the configurations. The repo's source format is unstructured. You can learn more about hierarchical and unstructured repos in Config Sync here: https://cloud.google.com/anthos-config-management/docs/how-to/unstructured-repo.
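As a hedged sketch of what such a rootsync.yaml typically contains (the repo URL, project ID, and service account email below are placeholder assumptions, not the blueprint's exact contents):

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  # Unstructured format: no fixed directory hierarchy is enforced
  sourceFormat: unstructured
  git:
    # The deployment repo is the single source of truth
    repo: https://source.developers.google.com/p/project-id/r/deployment-repo
    branch: main
    dir: "/"
    # Authenticate to Cloud Source Repositories with a GCP service account
    auth: gcpserviceaccount
    gcpServiceAccountEmail: config-sync-sa@project-id.iam.gserviceaccount.com
```

Config Sync watches this branch and continuously reconciles the cluster toward whatever is committed there.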

The configsync-iam.yaml file sets up the service account and the permissions for Config Sync to continuously reconcile with the deployment repo. The config-management.yaml file configures ACM (Anthos Config Management) on the Config Controller cluster.
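An IAM binding of this kind can itself be expressed as a Config Connector resource. A hedged sketch (the service account, role, and project here are illustrative assumptions) granting the Config Sync service account read access to the source repos in the project:

```yaml
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: config-sync-source-reader
  namespace: config-control
spec:
  # The service account Config Sync uses to pull from the deployment repo
  member: serviceAccount:config-sync-sa@project-id.iam.gserviceaccount.com
  # Read-only access to Cloud Source Repositories
  role: roles/source.reader
  resourceRef:
    kind: Project
    external: projects/project-id
```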

In the root gitops folder, services.yaml enables the Cloud Source Repositories and Cloud Build APIs in GCP. The source-repositories.yaml file creates the source repo and the deployment repo. We use two repositories to set up GitOps: the source repository is where we push the configurations along with the Kptfile. A Cloud Build pipeline is configured on this repo; it renders the kpt package, applies the setters, and pushes only the rendered configurations to the deployment repo, which Config Sync then reconciles.

If you want to tweak the pipeline configuration, refer to hydration-trigger.yaml.
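As a rough, simplified sketch of what such a hydration build does (the builder images, repo names, and shell steps below are illustrative assumptions, not the blueprint's exact pipeline):

```yaml
steps:
  # Render the kpt package: runs the mutator pipeline, e.g. apply-setters
  - name: gcr.io/kpt-dev/kpt
    args: ["fn", "render", "."]
  # Push only the hydrated output to the deployment repo
  - name: gcr.io/cloud-builders/git
    entrypoint: bash
    args:
      - -c
      - |
        git clone https://source.developers.google.com/p/$PROJECT_ID/r/deployment-repo /deploy
        cp -r ./* /deploy/
        cd /deploy
        git add .
        git commit -m "Hydrated configs from $SHORT_SHA"
        git push origin main
```

$PROJECT_ID and $SHORT_SHA are standard Cloud Build substitutions; Config Sync picks up the pushed commit on the next reconcile.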

Landing zone Components

Inside the landing-zone folder, create a folder called "network" for the network components. Create a network.yaml file and paste the resources below into it.

# CRD for Creating a Network
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeNetwork
metadata:
  name: network-name # kpt-set: ${network-name}
  namespace: networking # kpt-set: ${namespace}
  annotations:
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:networking/v0.4.2
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
spec:
  autoCreateSubnetworks: false
  deleteDefaultRoutesOnCreate: false
  routingMode: GLOBAL
---
# CRD for creating a Subnetwork
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeSubnetwork
metadata:
  name: network-name-subnetwork # kpt-set: ${prefix}${network-name}-subnetwork
  namespace: networking # kpt-set: ${namespace}
  annotations:
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:networking/v0.4.2
spec:
  description: Subnetwork
  ipCidrRange: 10.2.0.0/16 # kpt-set: ${ip-cidr-range}
  logConfig:
    metadata: INCLUDE_ALL_METADATA
    aggregationInterval: INTERVAL_10_MIN
    flowSampling: 0.5
  networkRef:
    name: network-name # kpt-set: ${network-name}
  privateIpGoogleAccess: false
  region: us-central1 # kpt-set: ${region}
---
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeRouterNAT
metadata:
  name: network-name-router-nat # kpt-set: ${prefix}${network-name}-router-nat
  namespace: networking # kpt-set: ${namespace}
  annotations:
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:networking/v0.4.2
spec:
  natIpAllocateOption: AUTO_ONLY
  region: us-central1 # kpt-set: ${region}
  routerRef:
    name: network-name-router # kpt-set: ${prefix}${network-name}-router
  sourceSubnetworkIpRangesToNat: ALL_SUBNETWORKS_ALL_IP_RANGES # kpt-set: ${source-subnetwork-ip-ranges-to-nat}
---
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeRouter
metadata:
  name: network-name-router # kpt-set: ${prefix}${network-name}-router
  namespace: networking # kpt-set: ${namespace}
  annotations:
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:networking/v0.4.2
spec:
  description: example router description
  networkRef:
    name: network-name # kpt-set: ${network-name}
  region: us-central1 # kpt-set: ${region}
---
# If you would like to configure a Shared VPC, add the resource below
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeSharedVPCHostProject
metadata:
  name: project-id-sharedvpc # kpt-set: ${project-id}-sharedvpc
  namespace: networking # kpt-set: ${namespace}
  annotations:
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:networking/v0.4.2

Create a setters.yaml file inside the network folder and add the setter configurations below:

# Setter File
apiVersion: v1
kind: ConfigMap
metadata:
  name: setters
data:
  # Required setters
  network-name: network-name
  project-id: project-id
  region: us-central1
  # Optional setters
  namespace: config-control
  prefix: ""
  ip-cidr-range: 10.2.0.0/16
  source-subnetwork-ip-ranges-to-nat: ALL_SUBNETWORKS_ALL_IP_RANGES

Create a Kptfile and add the following:

apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: vpc
  annotations:
    blueprints.cloud.google.com/title: Networking blueprint
    config.kubernetes.io/local-config: "true"
info:
  description: A Networking Module
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-setters:v0.1
      configPath: setters.yaml

Inside the landing-zone folder, create a folder called "security" for the security components. Inside it, create another folder called "firewalls", create a firewall.yaml file, and paste the resources below into it.

# CRD for Creating Firewall Rules
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeFirewall
metadata:
  name: network-name-fw-deny-all-egress # kpt-set: ${network-name}-fw-deny-all-egress
  namespace: firewalls-namespace # kpt-set: ${firewalls-namespace}
  annotations:
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:networking/v0.4.2
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
spec:
  priority: 65535
  deny:
    - protocol: tcp
    - protocol: udp
  destinationRanges:
    - "0.0.0.0/0"
  direction: EGRESS
  disabled: true # kpt-set: ${allow-default-egress}
  enableLogging: false # kpt-set: ${enable-logging}
  networkRef:
    name: network-name # kpt-set: ${network-name}
---
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeFirewall
metadata:
  name: network-name-fw-allow-iap-ssh # kpt-set: ${network-name}-fw-allow-iap-ssh
  namespace: firewalls-namespace # kpt-set: ${firewalls-namespace}
  annotations:
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:networking/v0.4.2
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
spec:
  priority: 10000 # kpt-set: ${priority}
  allow:
    - ports:
        - "22"
      protocol: tcp
  direction: INGRESS
  disabled: false
  enableLogging: false # kpt-set: ${enable-logging}
  networkRef:
    name: network-name # kpt-set: ${network-name}
  sourceRanges:
    - "35.235.240.0/20"
  targetTags:
    - allow-iap-ssh
---
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeFirewall
metadata:
  name: network-name-fw-allow-iap-rdp # kpt-set: ${network-name}-fw-allow-iap-rdp
  namespace: firewalls-namespace # kpt-set: ${firewalls-namespace}
  annotations:
    cnrm.cloud.google.com/blueprint: cnrm/landing-zone:networking/v0.4.2
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
spec:
  priority: 10000 # kpt-set: ${priority}
  allow:
    - ports:
        - "3389"
      protocol: tcp
  direction: INGRESS
  disabled: false
  enableLogging: false # kpt-set: ${enable-logging}
  networkRef:
    name: network-name # kpt-set: ${network-name}
  sourceRanges:
    - "35.235.240.0/20"
  targetTags:
    - allow-iap-rdp

Create a setters.yaml file inside the firewalls folder and paste the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: setters
  annotations:
    config.kubernetes.io/local-config: "true"
data:
  priority: "10000"
  allow-default-egress: "true"
  dont-allow-google-apis: "true"
  dont-allow-windows-kms: "true"
  enable-logging: "false"
  firewall-project-id: firewall-project-id
  firewalls-namespace: config-control
  google-api-cidr: |
    - 199.36.153.8/30
  network-name: network-name
  project-id: project-id
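The setters above include values such as google-api-cidr and dont-allow-google-apis that additional rules in the package can consume. As an illustrative, hedged sketch (this rule is an assumption for demonstration, not part of the file above), an egress rule toward the private Google APIs range might use those setters like this:

```yaml
# Hypothetical rule: allow HTTPS egress to the restricted Google APIs VIP
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeFirewall
metadata:
  name: network-name-fw-allow-google-apis # kpt-set: ${network-name}-fw-allow-google-apis
  namespace: firewalls-namespace # kpt-set: ${firewalls-namespace}
  annotations:
    cnrm.cloud.google.com/project-id: project-id # kpt-set: ${project-id}
spec:
  priority: 10000 # kpt-set: ${priority}
  allow:
    - protocol: tcp
      ports:
        - "443"
  direction: EGRESS
  # Toggled via the dont-allow-google-apis setter
  disabled: true # kpt-set: ${dont-allow-google-apis}
  # List setter: populated from the google-api-cidr value
  destinationRanges: # kpt-set: ${google-api-cidr}
    - "0.0.0.0/0"
  networkRef:
    name: network-name # kpt-set: ${network-name}
```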

Create a Kptfile and add the following:

apiVersion: kpt.dev/v1
kind: Kptfile
metadata:
  name: firewall-common-rules
  annotations:
    blueprints.cloud.google.com/title: Firewall Common Rules blueprint
    config.kubernetes.io/local-config: "true"
info:
  description: |
    Common firewall rules for projects with a private network.

    Included rules:

    - allow common ports between private IP ranges
    - allow common ports from GCP load balancer ranges
    - allow ssh and rdp from GCP IAP ranges
pipeline:
  mutators:
    - image: gcr.io/kpt-fn/apply-setters:v0.1
      configPath: setters.yaml

You can add more security components, such as organization policies, by creating the resources and setters accordingly, depending on which components you want in your landing zone.
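For instance, an organization policy can also be expressed as a Config Connector resource. A hedged sketch (the resource name, namespace, and choice of constraint are illustrative assumptions) enforcing the compute.disableSerialPortAccess constraint on a project:

```yaml
apiVersion: resourcemanager.cnrm.cloud.google.com/v1beta1
kind: ResourceManagerPolicy
metadata:
  name: disable-serial-port-access
  namespace: config-control
spec:
  # The organization policy constraint to enforce
  constraint: compute.disableSerialPortAccess
  # References a Project resource managed in the same namespace
  projectRef:
    name: project-id
  # Boolean constraints are simply switched on or off
  booleanPolicy:
    enforced: true
```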

Nomos

The nomos tool is a binary compiled from Go code, and you can install it locally. If you have the Google Cloud CLI, we recommend installing the nomos tool with gcloud components install nomos.

You can monitor the status of Config Sync on all enrolled clusters by using the nomos status command. For each cluster, nomos status reports the hash of the Git commit that was last applied to the cluster as well as any errors that have occurred while trying to apply any recent changes.

The status of your managed resources can be one of the following values:

  • InProgress: The actual state of the resource has not yet reached the state that you specified in the resource manifest. This status means that the resource reconciliation is not complete yet. Newly created resources usually start with this status, although some resources like ConfigMaps are Current immediately.
  • Failed: The process of reconciling the actual state with the state that you want has encountered an error or it has made insufficient progress.
  • Current: The actual state of the resource matches the state that you want. The reconciliation process is considered complete until there are changes to either the wanted or the actual state.
  • Terminating: The resource is in the process of being deleted.
  • NotFound: The resource does not exist in the cluster.
  • Unknown: Config Sync is unable to determine the status of the resource.

Refer https://cloud.google.com/anthos-config-management/docs/how-to/nomos-command for more details.

Next, I would love to share how to set up multi-tenancy using Config Controller, and how to import existing resources so they can be managed from your source of truth. Stay tuned!

Happy Learning! 😊
