Introduction to Crossplane
How to create any resource on the cloud using Kubernetes manifests and Crossplane.
In the Kubernetes era, all of your application blueprints are packaged as Kubernetes manifest files, or perhaps as charts using tools like Helm. So how do you create a resource on the cloud? You could:
- Use an external Terraform module to create the resource.
- Use a Kubernetes Job that creates the resources using the AWS SDKs.
- Use a bash/Python script that internally calls AWS CLI commands.
But how reliable is this? Unlike Kubernetes manifests, where the YAML can be edited on the fly, every time an attribute changes you have to explicitly re-run these external tools. And in the modern GitOps era, having such external dependencies might not be feasible for your GitOps solutions. How do we fix this, then? This is where Crossplane comes into the picture. Crossplane enables you to provision, compose, and consume infrastructure in any cloud service provider using the Kubernetes API. Using Crossplane, you can create cloud resources with simple manifests and then integrate them with your CI/CD or GitOps pipelines. Crossplane is an open-source project; it was started by Upbound and later adopted by the CNCF as a sandbox project.
What is the entire story all about? (TL;DR)
- Install Crossplane on our Kubernetes cluster (AKS, GKE, EKS, kind).
- Configure Crossplane to communicate with AWS.
- Install the packages required for Crossplane to communicate with AWS.
- Create a VPC, a Security Group, and an RDS instance using Crossplane from our Kubernetes cluster.
- Verify that the resources have been created from the AWS Console.
Prerequisites
- A Kubernetes cluster (can be on-prem, AKS, EKS, GKE, or kind).
- An AWS account.
- GitHub Link: https://github.com/pavan-kumar-99/medium-manifests
- GitHub Branch: crossplane
Install Crossplane in a Kubernetes Cluster
You can use an existing Kubernetes cluster for this demo. Alternatively, you can install a Kubernetes cluster using kind or using GitHub Actions. You can refer to my previous articles on how to create a Kubernetes cluster.
Once you have the Kubernetes cluster created, let us install Crossplane in it. You can clone my repo (crossplane branch) for all the manifests used in this article.
```shell
# Clone the repo
git clone https://github.com/pavan-kumar-99/medium-manifests.git -b crossplane
cd medium-manifests/crossplane-aws

# Create the namespace and install the components using helm
kubectl create namespace crossplane-system
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane --namespace crossplane-system crossplane-stable/crossplane

# Check that the components are up and healthy
kubectl get all -n crossplane-system
```

( OR )

```shell
git clone https://github.com/pavan-kumar-99/medium-manifests.git -b crossplane
cd medium-manifests/crossplane-aws
make install_crossplane
```
Alternatively, you can use a Makefile that I have written. It will install kind on your Mac/Linux machine, create a kind cluster, and then install Crossplane in it.
Let us now install the AWS Provider. This installs all the CRDs (Custom Resource Definitions) required to create resources on the cloud, e.g. rdsinstances.database.aws.crossplane.io, ec2.aws.crossplane.io/v1alpha1, etc.
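For reference, the aws-provider.yaml manifest looks roughly like the sketch below. The package version tag here is illustrative; the manifest in the repo may pin a different release.

```yaml
# A Crossplane Provider package for AWS (version tag is an example)
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.24.1
```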
```shell
kubectl apply -f aws-provider.yaml

# Once you install the Provider, wait for it to become healthy
kubectl get provider.pkg
```
Once the Provider is healthy, let us configure it to communicate with AWS by creating a ProviderConfig definition. Make sure that you have already configured your credentials using `aws configure` (from the CLI, if you are running the commands from a local cluster).
```shell
# Generate the configuration file with the AWS credentials
AWS_PROFILE=default && echo -e "[default]\naws_access_key_id = $(aws configure get aws_access_key_id --profile $AWS_PROFILE)\naws_secret_access_key = $(aws configure get aws_secret_access_key --profile $AWS_PROFILE)" > creds.conf

# Create a Kubernetes secret with the configuration file generated
kubectl create secret generic aws-secret-creds -n crossplane-system --from-file=creds=./creds.conf

# Once the secret is created, create the ProviderConfig for our AWS account
kubectl apply -f provider-config.yaml
```
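The provider-config.yaml ties the Provider to the secret we just created. A minimal sketch (the repo's manifest may differ slightly):

```yaml
# ProviderConfig pointing at the aws-secret-creds secret created above
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: aws-secret-creds
      key: creds
```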
Upon successful creation, your local cluster should now be able to communicate with AWS. Let us now try the following scenario: create a VPC and a Security Group that allows access on port 3306 from anywhere in the world, then create an RDS instance and attach the aforementioned SG to it so that it is publicly accessible. Once these resources are created, we will create a pod in our local cluster and check if it can access the RDS instance. Seems good? Let us now get into action.
Let us create a VPC in the us-east-1 region with the below-mentioned spec.
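A VPC manifest along these lines should work; the name and CIDR block here are illustrative, and the repo's aws-vpc.yaml may use different values:

```yaml
# A VPC managed resource in us-east-1 (name and CIDR are examples)
apiVersion: ec2.aws.crossplane.io/v1beta1
kind: VPC
metadata:
  name: production-vpc
spec:
  forProvider:
    region: us-east-1
    cidrBlock: 10.0.0.0/16
    enableDnsSupport: true
    enableDnsHostNames: true
  providerConfigRef:
    name: default
```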
```shell
kubectl apply -f aws-vpc.yaml

# Check the status of the VPC; it references the ProviderConfig created earlier
kubectl get vpc
```
Once our VPC is successfully created, let us create two subnets, attach an internet gateway to our VPC, and add a route table for it, so that we can create our RDS instance in these public subnets and access it from our local pod. Note that this is not the suggested approach in production: you should never spin up your RDS instance in a public subnet in a production environment.
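The two subnets can be sketched as below; the names, availability zones, and CIDR ranges are illustrative, and the subnets reference the VPC by name rather than by hardcoded ID:

```yaml
# Two subnets in different AZs, attached to the VPC created earlier
apiVersion: ec2.aws.crossplane.io/v1beta1
kind: Subnet
metadata:
  name: production-subnet-1
spec:
  forProvider:
    region: us-east-1
    availabilityZone: us-east-1a
    cidrBlock: 10.0.1.0/24
    vpcIdRef:
      name: production-vpc
  providerConfigRef:
    name: default
---
apiVersion: ec2.aws.crossplane.io/v1beta1
kind: Subnet
metadata:
  name: production-subnet-2
spec:
  forProvider:
    region: us-east-1
    availabilityZone: us-east-1b
    cidrBlock: 10.0.2.0/24
    vpcIdRef:
      name: production-vpc
  providerConfigRef:
    name: default
```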
```shell
kubectl apply -f aws-subnet.yaml

# Check the status of the subnets
kubectl get subnets
```
Let us now create the corresponding Internet gateway and Route table.
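A sketch of what aws-igwrt.yaml might contain: an InternetGateway attached to the VPC, and a RouteTable with a default route through it, associated with both subnets (all resource names here are examples):

```yaml
# Internet gateway for the VPC
apiVersion: ec2.aws.crossplane.io/v1beta1
kind: InternetGateway
metadata:
  name: production-igw
spec:
  forProvider:
    region: us-east-1
    vpcIdRef:
      name: production-vpc
  providerConfigRef:
    name: default
---
# Route table sending 0.0.0.0/0 through the IGW, associated with both subnets
apiVersion: ec2.aws.crossplane.io/v1beta1
kind: RouteTable
metadata:
  name: production-rt
spec:
  forProvider:
    region: us-east-1
    vpcIdRef:
      name: production-vpc
    routes:
      - destinationCidrBlock: 0.0.0.0/0
        gatewayIdRef:
          name: production-igw
    associations:
      - subnetIdRef:
          name: production-subnet-1
      - subnetIdRef:
          name: production-subnet-2
  providerConfigRef:
    name: default
```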
```shell
kubectl apply -f aws-igwrt.yaml

# Check the status of the Route Table and Internet Gateway
kubectl get InternetGateway,RouteTable
```
Let us now create the security group that allows communication on port 3306 from the internet. Later, we will attach this security group to our RDS instance.
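The security group could be defined along these lines (the group name and description are illustrative; the ingress rule opens 3306 to 0.0.0.0/0, as described above):

```yaml
# Security group allowing MySQL (3306) from anywhere - for demo only
apiVersion: ec2.aws.crossplane.io/v1beta1
kind: SecurityGroup
metadata:
  name: production-rds-sg
spec:
  forProvider:
    region: us-east-1
    groupName: production-rds-sg
    description: Allow MySQL access from anywhere
    vpcIdRef:
      name: production-vpc
    ingress:
      - fromPort: 3306
        toPort: 3306
        ipProtocol: tcp
        ipRanges:
          - cidrIp: 0.0.0.0/0
  providerConfigRef:
    name: default
```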
```shell
kubectl apply -f aws-sg.yaml

# Check the status of the Security Group
kubectl get SecurityGroup
```
Let us now create the RDS instance. Before we do, we need a DB subnet group in which the RDS instance will be created; we will use the subnets created earlier in the DB Subnet Group.
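A sketch of what aws-rds.yaml might look like: a DBSubnetGroup over the two subnets, plus the RDSInstance itself with the security group attached and its connection details written to the production-rds-conn-string secret. The instance class, engine version, and names are illustrative; the repo's manifest may differ:

```yaml
# DB subnet group spanning the two public subnets created earlier
apiVersion: database.aws.crossplane.io/v1beta1
kind: DBSubnetGroup
metadata:
  name: production-subnet-group
spec:
  forProvider:
    region: us-east-1
    description: Subnet group for the demo RDS instance
    subnetIdRefs:
      - name: production-subnet-1
      - name: production-subnet-2
  providerConfigRef:
    name: default
---
# Publicly accessible MySQL RDS instance; connection details land in a secret
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: production-rds
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t2.small
    masterUsername: admin
    allocatedStorage: 20
    engine: mysql
    engineVersion: "5.7"
    publiclyAccessible: true
    skipFinalSnapshotBeforeDeletion: true
    vpcSecurityGroupIDRefs:
      - name: production-rds-sg
    dbSubnetGroupNameRef:
      name: production-subnet-group
  writeConnectionSecretToRef:
    namespace: default
    name: production-rds-conn-string
  providerConfigRef:
    name: default
```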
```shell
kubectl apply -f aws-rds.yaml

# Check the status of the RDS Instance. The credentials are stored in a
# secret called production-rds-conn-string in the default namespace.
kubectl get RDSInstance
```
Now let us try to access our MySQL RDS instance. We can do this by decoding the secret production-rds-conn-string created in the default namespace and connecting with the MySQL client:
```shell
mysql -h <hostname> -u <user_name> -p<password>
```
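To fill in those placeholders, you can decode the secret's fields directly. By Crossplane's connection-secret convention, the keys are endpoint, username, password, and port:

```shell
# Decode the RDS connection details from the Crossplane connection secret
kubectl get secret production-rds-conn-string -n default -o jsonpath='{.data.endpoint}' | base64 -d; echo
kubectl get secret production-rds-conn-string -n default -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret production-rds-conn-string -n default -o jsonpath='{.data.password}' | base64 -d; echo
```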
Alternatively, you can spin up a pod and connect from the pod itself.
```shell
# Create a test pod that shows all the databases in the RDS Instance
kubectl apply -f aws-rds-connection-test.yaml
```
You should now see the databases in the logs of the Pod.
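The test pod can be as simple as the sketch below: a mysql client container that reads the host, user, and password from the connection secret and runs SHOW DATABASES. The pod name and image tag are illustrative; the repo's aws-rds-connection-test.yaml may differ:

```yaml
# Test pod: connects to the RDS instance using the connection secret
apiVersion: v1
kind: Pod
metadata:
  name: rds-connection-test
spec:
  restartPolicy: Never
  containers:
    - name: mysql-client
      image: mysql:5.7
      command: ["sh", "-c"]
      args:
        - mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -e 'SHOW DATABASES;'
      env:
        - name: MYSQL_HOST
          valueFrom:
            secretKeyRef:
              name: production-rds-conn-string
              key: endpoint
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: production-rds-conn-string
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: production-rds-conn-string
              key: password
```

Check the pod's logs with `kubectl logs rds-connection-test` to see the database list.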
Throughout the article we have hardcoded the resource names; Crossplane also offers an option to select resources using tags. For some reason, my resources were not being filtered even after tagging them. If you were able to filter them using tags, please feel free to paste the solution in the comments section.
Until we meet again……….
Thanks for reading my article. Hope you have liked it. Here are some of my other articles that may interest you.
Introduction to Bitnami Sealed Secrets
How to store your secrets in GitHub using Sealed Secrets and Kubeseal
Introduction to External DNS in Kubernetes
How to automatically create DNS records in Kubernetes using External DNS
Creating a GKE Cluster with GitHub Actions
Automating Kubernetes Cluster creation and Bootstrapping using GitHub Actions