Patterns for importing clusters into an IBM Multicloud Manager hub cluster
With the release of IBM Multicloud Manager 3.2.0, there are two main patterns for connecting a managed-cluster to a hub-cluster. In this article, I illustrate both patterns and then offer some simple tools to help make this process reliable.
The Hub cluster
To start, you need an IBM Multicloud Manager hub-cluster, which hosts the IBM Multicloud Manager controller: the central controller that runs in an IBM Cloud Private 3.2.0 cluster. The hub-cluster can be set up by deploying IBM Multicloud Manager with single_cluster_mode set to false and the multicluster-hub service enabled in the management services list. See the following example of the config.yaml file entry:
single_cluster_mode: false
management_services:
  multicluster-hub: enabled
The hub runs the microservice components that allow for managing multiple clusters. The hub must exist before you can import a managed-cluster.
The Managed cluster
Managed-clusters run the Klusterlet application. The Klusterlet is the agent that is installed in a Kubernetes cluster, making it a managed-cluster. The Klusterlet conveys and receives information from the hub-cluster.
Pattern 1
In this pattern, both the hub-cluster and the managed-cluster have already been installed as IBM Cloud Private clusters, and both are set up with self-signed certificates.
The kubeconfig for the managed-cluster is retrieved and passed to the cloudctl import command to import the managed-cluster into the hub. The kubeconfig data flows from the managed-cluster to the hub. See the following example:
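The original article embeds the script as a gist, which is not reproduced here. As a hedged sketch, an import helper along these lines could do the job, assuming cloudctl and kubectl are on the PATH; the exact cloudctl mc cluster import flags vary by MCM release, so treat them as illustrative and check cloudctl mc cluster import --help for your version:

```shell
#!/bin/bash
# Sketch of a Pattern 1 import helper (illustrative, not the article's gist).
set -euo pipefail

import_cluster() {
  local hub_url=$1 hub_user=$2 hub_pass=$3
  local managed_url=$4 managed_user=$5 managed_pass=$6

  # 1. Log in to the managed cluster and export its kubeconfig,
  #    flattened so credentials are inlined in one file.
  cloudctl login -a "$managed_url" -u "$managed_user" -p "$managed_pass" \
    -n kube-system --skip-ssl-validation
  kubectl config view --minify --flatten > managed-kubeconfig.yaml

  # 2. Derive the hub namespace from the managed cluster's cluster name.
  local cluster_name
  cluster_name=$(kubectl --kubeconfig managed-kubeconfig.yaml config view \
    -o jsonpath='{.clusters[0].name}')

  # 3. The clusters use self-signed certificates, so allow an insecure
  #    connection by dropping the embedded CA data and skipping TLS validation.
  kubectl --kubeconfig managed-kubeconfig.yaml config unset \
    "clusters.${cluster_name}.certificate-authority-data" || true
  kubectl --kubeconfig managed-kubeconfig.yaml config set-cluster \
    "$cluster_name" --insecure-skip-tls-verify=true

  # 4. Log in to the hub, create the reference namespace, and import.
  cloudctl login -a "$hub_url" -u "$hub_user" -p "$hub_pass" \
    -n kube-system --skip-ssl-validation
  kubectl create namespace "$cluster_name" || true
  # Hypothetical flags; consult `cloudctl mc cluster import --help`.
  cloudctl mc cluster import -C "$cluster_name" -n "$cluster_name" \
    -K managed-kubeconfig.yaml
}

if [ "$#" -eq 6 ]; then
  import_cluster "$@"
fi
```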
The script above can be used to import an existing IBM Cloud Private cluster into a hub. It takes admin login credentials for both the hub-cluster and the cluster to be managed. The namespace that references the managed-cluster on the hub is derived from the managed-cluster’s cluster name.
I consider it a best practice to use the managed-cluster’s cluster name as the reference namespace on the hub. This works cleanly as long as all your clusters have unique cluster names, which is itself a best practice.
Note that in the script, we update the generated kubeconfig to allow an insecure connection by skipping TLS validation, because our clusters are set up with self-signed certificates.
This is an example of running the script with arguments:
./script.sh https://hubcluster:8443 admin admin https://managedcluster:8443 admin admin
Pattern 2
In this pattern, the kubeconfig information of the hub is retrieved and used to import the managed-cluster into the hub. The kubeconfig data flows from the hub to the managed-cluster.
Typically, this pattern is used when installing a new IBM Cloud Private cluster. The kubeconfig is retrieved and stored in the following path:
/opt/ibm/cluster/klusterlet-bootstrap.kubeconfig
To enable the Klusterlet to start up and read the hub’s kubeconfig at install time, the multicluster-endpoint service needs to be enabled in the config.yaml file, as you see in the following example:
management_services:
  multicluster-endpoint: enabled
If you are deploying IBM Cloud Private with one of the community Terraform templates, you can follow this pattern by updating the icp-preinstall template variable. This will generate the klusterlet-bootstrap.kubeconfig file.
variable "icp-preinstall" {
  default = [
    "wget https://gist.githubusercontent.com/cdoan1/94ff8432f1c4a5b5bf0c499b8bae8787/raw/mcm-setup-kubeconfig.sh",
    "chmod 755 mcm-setup-kubeconfig.sh",
    "./mcm-setup-kubeconfig.sh https://hubcluster:8443 admin admin"
  ]
}
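In essence, a pre-install script like mcm-setup-kubeconfig.sh logs in to the hub and stages the hub’s kubeconfig at the bootstrap path the Klusterlet reads. The following is a hypothetical sketch of that idea, not the contents of the actual gist, and it assumes cloudctl and kubectl are available:

```shell
#!/bin/bash
# Hypothetical sketch: fetch the hub's kubeconfig and stage it where
# the Klusterlet expects to find it during install (Pattern 2).
set -euo pipefail

stage_hub_kubeconfig() {
  local hub_url=$1 hub_user=$2 hub_pass=$3
  local target=/opt/ibm/cluster/klusterlet-bootstrap.kubeconfig

  # Log in to the hub; --skip-ssl-validation because the clusters in
  # this article use self-signed certificates.
  cloudctl login -a "$hub_url" -u "$hub_user" -p "$hub_pass" \
    -n kube-system --skip-ssl-validation

  # Flatten the resulting kubeconfig (credentials inlined) and write it
  # to the bootstrap path read by the Klusterlet at install time.
  mkdir -p "$(dirname "$target")"
  kubectl config view --minify --flatten > "$target"
}

if [ "$#" -eq 3 ]; then
  stage_hub_kubeconfig "$@"
fi
```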
You can enable the multicluster-endpoint service by setting the TF_VAR_management_services environment variable. See the following example:
export TF_VAR_management_services='{multicluster-endpoint = "enabled"}'
Either of these patterns can help you connect a managed-cluster to a hub. For more details on the steps for importing clusters, see Importing a target managed-cluster to the IBM Multicloud Manager hub-cluster.