Using Crossplane to Provision a Kubernetes Cluster in Google Cloud

Adriana Villela
Published in Dzero Labs
May 17, 2021 · 10 min read
Photo by Dzero Labs

The New Kid in Town

Ever since a former co-worker pointed me in the direction of Crossplane, I’ve been both obsessed and intrigued. I just HAD to try it out for myself. In case you’re not familiar with it, Crossplane is an open-source, Kubernetes-native tool for provisioning Cloud infrastructure. It was initially released in December 2018.

You might be wondering…with Terraform, Pulumi, and even Ansible in the mix, why should I consider Crossplane?

Fair question! There are a few things that really intrigued me about Crossplane.

Okay…enough of me yapping. Let’s dig in!

Creating a GKE Cluster Using Crossplane

This tutorial will guide you through the creation of a Kubernetes cluster on Google Cloud using Crossplane.

Pre-requisites

This tutorial assumes that:

  • You have an existing Google Cloud project
  • You’ve created a Service Account in Google Cloud
  • You’ve created a Google Kubernetes Engine (GKE) cluster before
  • You have an existing Kubernetes cluster up and running, since we’ll be installing Crossplane on that cluster.
  • You have the envsubst command installed on your machine. On Ubuntu, use apt-get install gettext; for Mac and RedHat/CentOS, see the example commands below.
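If you need to install envsubst, the commands below typically do the trick (the package names are assumptions per platform; envsubst ships as part of GNU gettext):

# macOS (Homebrew); gettext is keg-only, so force-link it to put envsubst on your PATH
brew install gettext
brew link --force gettext

# RedHat/CentOS
sudo yum install gettext

# Ubuntu/Debian
sudo apt-get install gettext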

1- Clone the tutorial repo

Let’s begin by cloning the tutorial repo:

git clone git@github.com:d0-labs/crossplane-gke.git

2- Set up your environment variables

For your convenience, I’ve created a file called env_vars.sh in the scripts folder:
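The file just exports a handful of environment variables with placeholder values. Roughly, it looks like this (the variable names below are assumptions; check the actual file in the repo for the exact names):

# scripts/env_vars.sh (sketch)
export GCP_PROJECT_ID="<gcp_project_id>"
export GCP_SERVICE_ACCOUNT_NAME="<gcp_service_account_name>"
export GCP_SERVICE_ACCOUNT_KEYFILE="<gcp_service_account_keyfile>"
export GKE_CLUSTER_NAME="<gke_cluster_name>"
export GKE_CLUSTER_ZONE="<gke_cluster_zone>"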

Replace the values in <...> with your own as follows:

<gcp_project_id>: This is the ID of your Google Cloud project. If you’re wondering what your project ID is, use this command:

gcloud projects list

Sample output:

PROJECT_ID        NAME              PROJECT_NUMBER
aardvark-project aardvark-project 112233445566

Use the value returned in the PROJECT_ID column.

<gcp_service_account_name>: The name of the service account for your Google Cloud project.

<gcp_service_account_keyfile>: This is the fully-qualified path of the JSON Service Account private key file stored on your local machine. For example, /home/myuser/my-sa.json if your file is located in the /home/myuser folder, or my-sa.json if your file is located in your current working directory.

Note: This JSON private key is generated upon creation of the Service Account, so be sure to store it somewhere safe (and not in version control). Per Google’s docs on Service Account Keys, “After you download the key file, you cannot download it again.”

<gke_cluster_name>: Name of the existing Kubernetes cluster on Google Cloud where you’ll be installing Crossplane.

<gke_cluster_zone>: Name of the zone in which your Crossplane Kubernetes cluster resides.

Note: This tutorial assumes that the k8s cluster you’re using to install Crossplane is running on Google Cloud. If yours isn’t, feel free to comment out or remove the cluster name and zone variables as needed.

Save your changes, and head on over to the next step. We’re not executing this file at this point.

3- Configure Google Cloud

Note: If the Kubernetes cluster on which you’re installing Crossplane is not on Google Cloud, you can skip this step.

Let’s make sure that you’re all set up properly in Google Cloud. The 1-gcp_config.sh script activates your Service Account to authenticate against Google Cloud, sets your GCP project to the one where your (soon-to-be) Crossplane Kubernetes cluster is located, and connects to that cluster.
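Under the hood, the script boils down to a few gcloud calls along these lines (a sketch using the variable names assumed earlier; the script in the repo is the source of truth):

# Authenticate as the Service Account, point gcloud at the right project, and fetch cluster credentials
gcloud auth activate-service-account "$GCP_SERVICE_ACCOUNT_NAME" --key-file="$GCP_SERVICE_ACCOUNT_KEYFILE"
gcloud config set project "$GCP_PROJECT_ID"
gcloud container clusters get-credentials "$GKE_CLUSTER_NAME" --zone "$GKE_CLUSTER_ZONE"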

Run the script:

./scripts/1-gcp_config.sh


4- Install Crossplane on your Kubernetes cluster

Per Crossplane’s docs, you can either install Crossplane on your own Kubernetes cluster, or you can use their hosted service, Upbound Cloud.

Note: Upbound Cloud is nothing more than a Cloud-hosted Kubernetes cluster that already has Crossplane installed for you. This is definitely a very viable option, as it spares you the need to provision a Kubernetes cluster just to run Crossplane.

If you choose to use Upbound Cloud instead, feel free to skip this step.

For our example, we’ll be installing Crossplane on an existing Kubernetes cluster using the 2-install_crossplane.sh script below:
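The heart of the script is a standard Helm install of the Crossplane chart, roughly like this (a sketch; the script in the repo pins the exact chart version and namespace):

# Create a namespace for Crossplane and install the chart from the stable Crossplane Helm repo
kubectl create namespace crossplane-system
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane --namespace crossplane-system crossplane-stable/crossplane --version 1.2.0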

This script installs Crossplane v1.2.0 on your Kubernetes cluster using Helm. Let’s go ahead and do that:

./scripts/2-install_crossplane.sh


5- Install & Configure the Crossplane GCP Provider

The 3-configure-gcp-provider.sh script installs and configures the Crossplane GCP Provider. A Provider is the code that a Cloud infrastructure provisioning tool (in this case, Crossplane) uses to interact with the target Cloud service’s API (in this case, the GCP API). The Cloud service’s API is what’s used to provision the infrastructure. It’s the same concept in Terraform and Pulumi.

The script uses kubectl to install the GCP Provider. The Provider is a custom resource. We could also use Crossplane’s CLI (it’s a kubectl plugin) to do that, but I prefer this approach, since it’s one less thing to install, and it’s declarative, so you can keep it version-controlled.

The GCP Provider YAML looks like this:
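A minimal Provider manifest looks something like this (the package tag below is an assumption; grab the exact one from the provider-gcp repo or the Crossplane docs for your version):

apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-gcp
spec:
  # Crossplane pulls this package and installs the GCP CRDs and their controller
  package: crossplane/provider-gcp:v0.17.0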

Note: You can find this YAML in Crossplane’s own provider-gcp GitHub repo.

After installing the GCP Provider, we must configure it, telling it about our GCP project and how to authenticate, so that Crossplane can actually provision infrastructure there. This happens further down in the shell script.

Although I could’ve hard-coded my GCP Provider configs in a YAML file, I’ve chosen to do some templating instead, using the Linux envsubst command (see the prerequisites section for how to install it). This command replaces environment variables in a specified file (in our case, provider-config-gcp.template.yml) with their values. In our script, the environment variables are set when we source . ./scripts/env_vars.sh near the top. envsubst then substitutes those values into provider-config-gcp.template.yml, producing a new file called provider-config-gcp.yml.
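The templating step itself boils down to something like this (a sketch; the exact file paths in the repo may differ, and the real script presumably also base64-encodes the Service Account key so it can be injected into the Secret):

# Load the environment variables, render the template, and apply the result
. ./scripts/env_vars.sh
envsubst < provider-config-gcp.template.yml > provider-config-gcp.yml
kubectl apply -f provider-config-gcp.yml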

The provider-config-gcp.yml file will look something like this:
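Here’s a rough rendering (the Secret name and key are assumptions; the ProviderConfig layout follows the Crossplane GCP provider’s schema):

apiVersion: v1
kind: Secret
metadata:
  name: gcp-creds
  namespace: crossplane-system
type: Opaque
data:
  # base64-encoded contents of your Service Account JSON key file
  creds: <base64-encoded-service-account-key>
---
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: my-gcp-project
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-creds
      key: creds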

In the file above, we’re creating a Kubernetes Secret which contains our GCP Service Account key file. The Secret is used by a ProviderConfig custom resource to authenticate to our Google Cloud project, my-gcp-project. This gives Crossplane permission to create resources in my-gcp-project.

Now that we understand what the shell script is doing, let’s go ahead and run it, so that we can install and configure the GCP Provider:

./scripts/3-configure-gcp-provider.sh


6- Provision a GKE cluster

With Crossplane installed and our GCP Provider configured, we can finally provision our cluster! The script below does this for us:
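That script doesn’t do much beyond applying the manifest with kubectl; a plausible sketch (the real script in the repo may also source env vars or run envsubst first):

kubectl apply -f gke-install.yml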

Looking at gke-install.yml, we see that this is where the magic happens. Here, we’re provisioning a GKE cluster and nodepool:
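To give you a feel for the shape of that manifest, here’s a heavily simplified sketch. The apiVersion and field names below are best-effort assumptions based on the Crossplane GCP provider; the repo’s gke-install.yml and the API docs linked below are the source of truth:

apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKECluster
metadata:
  name: gke-crossplane-cluster
spec:
  forProvider:
    # Zone (or region) where the cluster control plane lives
    location: us-central1-a
  providerConfigRef:
    name: default
---
apiVersion: container.gcp.crossplane.io/v1alpha1
kind: NodePool
metadata:
  name: gke-crossplane-np
spec:
  forProvider:
    # Reference the GKECluster defined above
    clusterRef:
      name: gke-crossplane-cluster
    initialNodeCount: 2
  providerConfigRef:
    name: default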

If you’ve provisioned a GKE cluster with Terraform, Pulumi, or Ansible, the fields may look familiar. Be sure to check out the GKECluster API docs and the NodePool API docs for more info on these fields.

Note: The GKECluster and NodePool custom resources are not namespaced (i.e. don’t belong to a namespace).

To create the cluster, let’s run the script:

./scripts/4-create_gke_cluster.sh


We can check the status of our cluster:

kubectl describe gkecluster gke-crossplane-cluster

If you scroll down to the end of your output to the Events section, you’ll see events telling you that the cluster is being provisioned.

If you take a peek at your Google Cloud Console, under Kubernetes Engine > Clusters, you’ll see the new cluster being created.

And if you click on the cluster, and then the Nodes tab, you’ll see that your node pool is being provisioned as well.

Don’t panic if you see error or warning messages in the console at this point. The node pool is still being created.

To check on the status of the node pool creation:

kubectl describe nodepool gke-crossplane-np


It takes a while to create the node pool, so keep re-running the above command until the output indicates that the node pool is ready.

The overall cluster creation process should take about 5–10 minutes.

7- Connect to the cluster

If all goes well, you should now have a brand-spanking-new Kubernetes cluster. Before we can check our cluster, we need to add it to our kubeconfig:

gcloud container clusters get-credentials gke-crossplane-cluster --zone us-central1-a --project <your_project>

Be sure you replace <your_project> with your actual GCP project ID.

Now, let’s do a quick spot check. First, let’s make sure that the cluster is in our kubeconfig:

kubectl config get-contexts

The new cluster should show up in the list of contexts.

Yup. There’s our cluster!

Let’s also run a quick command in our cluster to check the nodes:

kubectl get nodes

Since we provisioned a node pool with two nodes, you should see two nodes in the output.

And let’s peek into our namespaces:

kubectl get ns

There you go! We’ve got ourselves a GKE cluster!

8- Delete the cluster and nodepool

To delete the cluster, all you need to do is delete the GKECluster resource from Kubernetes.

kubectl delete gkecluster gke-crossplane-cluster

The command prompt won’t return until the cluster has been deleted. You can also check deletion status in the Google Cloud Console.

For good measure, also delete the NodePool:

kubectl delete nodepool gke-crossplane-np

Thoughts on Crossplane for Cloud Provisioning

Crossplane hasn’t been around for very long, and it shows. While there’s a lot of support for AWS resources, support for Google Cloud and Azure is rather lacking in comparison. That said, I’m sure that as it picks up steam, the good folks at Crossplane will be adding more resources from those two Cloud Providers (and others) into the mix.

As far as open source software goes, I found it relatively straightforward to get going using their getting started guide. I did run into a bit of a pickle initially, as I found myself reading docs from both version 1.2.0 and 0.7 without realizing it at the time, and wondering why things weren’t working as they should. That was my fault…but since it happened to me, it could happen to you, so just make sure you’re looking at the right version of the Crossplane docs when you’re trying stuff on your own.

My only complaint with the quickstart is that I really don’t understand why Crossplane suggests installing their CLI and then installing your desired Provider package, when you could easily do it declaratively by applying the Provider YAML (like for GCP) using kubectl.

I like the fact that Crossplane has the Upbound Cloud managed Crossplane service, so you don’t have to figure out how to set up and configure your own k8s running Crossplane just to be able to provision Cloud infrastructure.

It’s a nice option to have, though I didn’t find it too bad installing it on an existing k8s cluster.

Note: My example had you installing Crossplane on an existing GKE cluster to provision another GKE cluster (very meta). I’m guessing that this is not a very common use case, and that the makers of Crossplane expect you to either use the Upbound Cloud managed Crossplane service or to install Crossplane on a local k8s cluster like KinD or MiniKube.

Conclusion

Crossplane is a Kubernetes-native tool for provisioning Cloud infrastructure.

Unlike Terraform and Pulumi, which keep state files and let you alter Cloud infrastructure outside the tool (with possibly devastating consequences), Crossplane continuously reconciles your Cloud resources against what’s declared in Kubernetes, so it doesn’t let you get away with that kind of drift.

Like Ansible, it adheres to the principles of Infrastructure as Data, using YAML to declaratively provision Cloud resources. This definitely puts it in my good books. ❤️

Right now, I’m still leaning towards Ansible for provisioning Cloud resources in Google Cloud or Azure, since the number of resources supported by Crossplane for these providers is a fair bit lower compared to AWS. That said, don’t count Crossplane out of the mix. It’s definitely worth keeping an eye on it to see what the future brings!

And now, I will leave you with a picture of our sleepy rat, Susie, in my husband’s arms.

Photo by Dzero Labs

Peace, love, and code.

Further Reading

Be sure to check out other posts in my Cloud infrastructure series!
