In recent years, Kubernetes has become the nucleus of container orchestration. With the growing number of microservices, managing clusters at scale has become an imperative requirement. At Condé Nast, this means having a stable and coherent approach to deploying, managing and upgrading multiple Kubernetes clusters distributed globally. This blog post presents an overview of how Condé Nast prototypes tools such as ClusterAPI to ensure a sustainable cluster provisioning mechanism.
Over time, multiple tools have emerged within the ecosystem, providing bootstrap capabilities for Kubernetes clusters hosted on various infrastructure providers (e.g. AWS, GCP, Azure, OpenStack). Kubeadm, tectonic-installer, kops and kubespray are just a few of the tools widely used within the community. However, it is difficult to find a common denominator when it comes to the cloud providers supported by each tool.
As an example, at Condé Nast, tectonic-installer (provided by CoreOS) has been the tool of choice for launching clusters in AWS. Using Terraform, tectonic-installer enables the deployment of multi-master, multi-node, upstream Kubernetes clusters. However, tectonic-installer is no longer under active development and its features are being converged with the Red Hat OpenShift Container Platform. Given these circumstances, the cloud platforms team at Condé Nast is investigating and trialling alternative mechanisms for cluster provisioning, including ClusterAPI.
ClusterAPI provides a declarative set of APIs for cluster creation, configuration, management and deletion. It aims to expose a unified and sustainable interface for cluster initialisation on-prem and with supported cloud providers. ClusterAPI is currently in its v1alpha2 release and integrates with 12 major infrastructure providers.
With ClusterAPI, two types of clusters can be distinguished:
- management cluster — hosts the controller managers for the ClusterAPI core, bootstrap and infrastructure providers. It is a hard dependency for provisioning target clusters.
- target cluster — a Kubernetes cluster which is created and managed by the management cluster.
For testing and development purposes, kind can be used to bootstrap the management cluster. For a more sustainable setup, it is recommended to run ClusterAPI on a production-grade Kubernetes cluster. Generally, the management cluster hosts three controller managers:
- core CRDs — manages the lifecycle of the ClusterAPI CRDs (e.g. Machine, Cluster)
- bootstrap provider — generates the configuration necessary to bootstrap an instance into a Kubernetes node. Currently, kubeadm and Talos are the supported bootstrap mechanisms.
- infrastructure provider — consumes the bootstrap configuration and creates resources in the infrastructure provider of choice.
Additionally, ClusterAPI introduces core types, as custom resource definitions (CRDs), to manage the provisioning of the target clusters. The core CRDs are:
- Cluster — contains the details required by the infrastructure provider to create a Kubernetes cluster (e.g. CIDR blocks for pods, services).
- Machine — encapsulates the configuration of a Kubernetes node (e.g. kubelet version)
- MachineSet — ensures the desired number of Machine resources are up and running at all times (similar to a ReplicaSet)
- MachineDeployment — reconciles changes to Machine resources, providing a rolling-update strategy between MachineSet configurations (similar to a Deployment)
It is worth noting that ClusterAPI treats Machines as immutable resources. If the configuration of a Machine resource is updated, a new resource with the latest config is created. Hence, it is recommended to use MachineDeployments to roll out changes to Machine objects.
This section of the blog post provides instructions on how to provision a Kubernetes cluster in AWS using ClusterAPI with kubeadm as the bootstrap mechanism. In this example, we use kind, which simplifies the creation of the management cluster on the local machine. To spin up a kind cluster, use the following commands:
kind create cluster --name=clusterapi

### export the kubeconfig file created by kind
export KUBECONFIG="$(kind get kubeconfig-path --name="clusterapi")"
As mentioned, the management cluster hosts three controller managers for ClusterAPI (core CRDs, bootstrap and infrastructure providers). To install these components, use the following commands:
### install the ClusterAPI CRDs managers
kubectl create -f https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.5/cluster-api-components.yaml

### install the kubeadm bootstrap manager
kubectl create -f https://github.com/kubernetes-sigs/cluster-api-bootstrap-provider-kubeadm/releases/download/v0.1.3/bootstrap-components.yaml
For the ClusterAPI AWS provider, it is required to generate the IAM roles and policies for the cluster. This can be achieved with clusterawsadm. For more details on how to install and use clusterawsadm, follow this guide.
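As a rough sketch, clusterawsadm picks up AWS credentials from the environment and creates the IAM roles and policies via a CloudFormation stack; the subcommand below reflects the alpha releases of the tool and may differ in newer versions, and the credential values are placeholders:

### AWS credentials and region are read from the environment
export AWS_REGION=us-east-1
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>

### create the CloudFormation stack containing the required IAM roles and policies
clusterawsadm alpha bootstrap create-stack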
Once the above pre-requisites are fulfilled, the next step is to install the AWS infrastructure controller manager:
### Create the base64 encoded credentials using clusterawsadm.
### This command uses your environment variables and encodes
### them in a value to be stored in a Kubernetes Secret.
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm alpha bootstrap encode-aws-credentials)

### Create the components
curl -L https://github.com/kubernetes-sigs/cluster-api-provider-aws/releases/download/v0.4.2/infrastructure-components.yaml \
| envsubst \
| kubectl create -f -
At this stage, the management cluster has all the necessary dependencies, which enables the creation of target clusters. The target cluster configuration is described using the ClusterAPI core CRDs. These contain the details of how AWS EC2 instances should be configured and added as nodes to the Kubernetes cluster: the desired instance type, region, subnets and so on.
Note: Refer to the ClusterAPI usage documentation for a full YAML representation of the Cluster and Machine resources described in the following section. The configuration below is indicative and has been purposefully shortened.
To launch a target cluster, a Cluster object must be deployed. This ensures that the underlying infrastructure and networking components are created within the AWS environment. For example, the configuration below ensures the creation of a VPC, security groups, load balancers etc. in the us-east-1 region and attaches a default ssh-key to the bastion instance. (The bastion instance is used to expose the cluster nodes to external traffic/the internet.)
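A shortened sketch of the Cluster and its AWSCluster infrastructure reference, modelled on the v1alpha2 quick-start examples — the resource name conde-cluster, the pod CIDR block and the ssh-key name are illustrative placeholders:

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: conde-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: conde-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: conde-cluster
spec:
  region: us-east-1
  sshKeyName: default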
The next step is to launch the control plane, i.e. the master nodes. Applying the YAML configuration below invokes a kubeadm init operation, translating the bootstrap object into a cloud-init script. As a result, the EC2 instance is added to the Kubernetes cluster as a master node.
### Refer to the ClusterAPI usage documentation for a
### full YAML representation of the control plane Machine resource
apiVersion: cluster.x-k8s.io/v1alpha2
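A shortened sketch of such a control plane Machine, again based on the v1alpha2 quick-start examples — the resource names are placeholders, and the KubeadmConfig and AWSMachine objects it references are assumed to be defined alongside it:

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: conde-controlplane-0
  labels:
    cluster.x-k8s.io/control-plane: "true"
    cluster.x-k8s.io/cluster-name: conde-cluster
spec:
  version: v1.15.3
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: conde-controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: conde-controlplane-0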
The last step is to supply the worker nodes to the target cluster. This is done by triggering a kubeadm join command, which attaches the Machine resources to the existing control plane. In this instance, the MachineDeployment creates 2 worker nodes running Kubernetes v1.15.3. As part of the infrastructure reference (e.g. AWSMachineTemplate), it is possible to attach a predefined IAM role and associated ssh-keys to the instances.
### Refer to the ClusterAPI usage documentation for a
### full YAML representation of the MachineDeployment resource
apiVersion: cluster.x-k8s.io/v1alpha2
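A shortened sketch of the MachineDeployment for the 2 worker nodes, following the same v1alpha2 conventions — the conde-worker names are placeholders, and the referenced KubeadmConfigTemplate and AWSMachineTemplate objects are assumed to be defined separately:

apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: conde-worker
  labels:
    cluster.x-k8s.io/cluster-name: conde-cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: conde-cluster
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: conde-cluster
    spec:
      version: v1.15.3
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
          name: conde-worker
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: AWSMachineTemplate
        name: conde-worker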
Once the above YAML manifests are applied, a fully working Kubernetes cluster with 1 master and 2 worker nodes should be provisioned.
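To verify the result, the target cluster's kubeconfig can be retrieved from the management cluster; in the v1alpha2 releases it is stored in a Secret conventionally named <cluster-name>-kubeconfig (conde-cluster below is a placeholder cluster name):

### fetch the kubeconfig for the target cluster from the management cluster
kubectl get secret conde-cluster-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > target.kubeconfig

### list the nodes of the target cluster (expect 1 master and 2 workers)
kubectl --kubeconfig=target.kubeconfig get nodes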
And that’s it!
ClusterAPI is a compelling and powerful tool. The problem it tries to solve recurs frequently within the open-source community: how a Kubernetes cluster can be deployed to multiple infrastructure providers with minimal changes to the existing manifests. ClusterAPI has a robust set of capabilities, and its further development will only make deploying Kubernetes clusters across platforms more effortless.