Explore Kubernetes Multitenancy With vCluster Using the GitOps Approach

Eleni Grosdouli
6 min read · May 3, 2024

Introduction

In previous posts, we discussed the multicloud concept and Kubernetes deployments in hybrid/multi-cloud environments. A logical progression is to add multitenancy to the discussion.

“Multitenancy (or multi-tenancy) refers to a single software installation that serves multiple tenants. A tenant is a user, application, or a group of users/applications that utilize the software to operate on their own data set.”

Multitenancy is hard and sometimes frustrating, especially for platform administrators. The most commonly used multitenancy models are:

  • Cluster-based isolation
  • Namespace-based isolation

Cluster-based isolation: Each tenant gets a dedicated cluster. This is advantageous because it provides stronger isolation, and each cluster stays relatively simple from a configuration point of view. However, with tens of thousands of tenants, creating a cluster for each one becomes very hard, or even impossible, to manage. Additionally, the cost, especially in a cloud-based environment, can be high.

Namespace-based isolation: Tenants are restricted to one or more dedicated namespaces within a shared Kubernetes cluster. With this approach, we do not need a cluster per tenant, which means fewer wasted resources and cost savings, but the underlying environment can become complex.
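For context, namespace-based isolation is typically backed by a per-tenant namespace combined with policies such as ResourceQuota or NetworkPolicy. A minimal sketch (the tenant-a names and quota figures are hypothetical, not part of this demo):

apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"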

What is vCluster?

After some digging around the cloud native landscape, vCluster came up. vCluster provides the ability to create virtual clusters that run inside a shared Kubernetes cluster but appear to the end user as dedicated, standalone clusters.

vCluster Architecture, Source: https://www.vcluster.com/docs

What is today’s demo?

Today’s use case comes from the development world. Developers would like to rapidly create separate clusters for testing purposes without fear of impacting production environments or concurrent development tasks.

We will present how easy it is to utilise Helm to create two virtual clusters with custom values within a shared Civo cluster.

Lab Setup

+------------------+--------------+------------------------------------+
| Cluster Name     | Version      | Comments                           |
+------------------+--------------+------------------------------------+
| civo-cluster01   | v1.28.7+k3s1 | Civo 3 Node - Medium Standard      |
| vcluster-dev     | v1.29.0+k3s1 | Defined in the `dev` namespace     |
| vcluster-staging | v1.29.0+k3s1 | Defined in the `staging` namespace |
+------------------+--------------+------------------------------------+

Prerequisites

  1. Helm version ≥ v3.10.0
  2. kubectl available: Use the guide found here
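A quick way to verify both prerequisites locally:

$ helm version --short
$ kubectl version --client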

Step 1: Download and export Kubeconfig — Civo Portal

As mentioned above, we already provisioned a three-node Kubernetes cluster with Civo. More information about how to create a cluster and retrieve the kubeconfig can be found here.

Export KUBECONFIG

$ export KUBECONFIG=~/demo/vcluster/civo-cluster01.yaml

Step 2: Prepare Custom values.yaml file

As we want to access the virtual clusters via a LoadBalancer IP address, we will create a custom-values-<env>.yaml file alongside a Service of type LoadBalancer. Afterwards, we will pass the values file as an argument during the Helm installation.

Note: The complete vCluster Helm values list can be found here.

LoadBalancer Service

$ cat vcluster-loadbalancer.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: vcluster-loadbalancer
  namespace: dev
spec:
  selector:
    app: vcluster
    release: vcluster-dev
  ports:
    - name: https
      port: 443
      targetPort: 8443
      protocol: TCP
  type: LoadBalancer
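Apply the manifest to the host cluster before checking the service (the dev namespace must already exist; it is also created explicitly in Step 3):

$ kubectl apply -f vcluster-loadbalancer.yaml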
$ kubectl get svc -n dev | grep -i LoadBalancer
vcluster-loadbalancer LoadBalancer 10.43.141.220 x.x.x.x 443:31043/TCP 38m

For more details about the LoadBalancer configuration, check out the documentation here.

vcluster-dev — Custom Values

$ cat custom-values-dev.yaml
# Checkout the HA documentation: https://www.vcluster.com/docs/v0.19/deploying-vclusters/high-availability

# Scale up syncer replicas
syncer:
  replicas: 1
  extraArgs:
    - --tls-san=<LoadBalancer IP>

# Scale up etcd
etcd:
  replicas: 1

# Scale up DNS server
coredns:
  replicas: 1

# Virtual Cluster (k3s) configuration
vcluster:
  # Image to use for the virtual cluster
  image: rancher/k3s:v1.29.0-k3s1

# Specify an initial cluster configuration
init:
  manifests: |-
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: nginx-app
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-dev
      namespace: nginx-app
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:latest
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-dev
      namespace: nginx-app
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
          targetPort: 80
      type: ClusterIP

What is important in the YAML above? Firstly, we specify the k3s version and define the syncer extraArgs with the LoadBalancer IP address created above. What is really cool is that we can deploy any required Kubernetes resources during the Helm chart installation (init.manifests). In this case, we deploy a simple Nginx application in the namespace nginx-app.

If High Availability (HA) is required, we can increase the replicas in the syncer, etcd and coredns sections, as sketched below.
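A rough sketch of what the HA variant of those sections could look like, following the HA documentation linked in the values file above (the replica counts are indicative only):

syncer:
  replicas: 3

etcd:
  replicas: 3

coredns:
  replicas: 2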

Note: Depending on the use case, we can define the Kubernetes resources under the init section either as plain manifests (as above) or as Helm charts; a sketch of the chart variant follows.
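The chart variant could look roughly like the snippet below. This is an assumption based on the vCluster v0.19 init values; verify the exact field names against the Helm values reference linked above. The chart name, repository and release details are only illustrative.

init:
  helm:
    - chart:
        name: nginx
        repo: https://charts.bitnami.com/bitnami
        version: "<chart version>"
      values: |-
        replicaCount: 1
      release:
        name: nginx-dev
        namespace: nginx-app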

Cluster vcluster-staging

The custom-values-staging.yaml file is similar to the custom-values-dev.yaml file apart from the details related to the staging environment: its own LoadBalancer IP address, a different application deployment in a different namespace, and so on. A sketch of the differences follows.
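A hypothetical sketch of the sections that differ in custom-values-staging.yaml (the IP address and workload names are placeholders):

syncer:
  replicas: 1
  extraArgs:
    - --tls-san=<staging LoadBalancer IP>

vcluster:
  image: rancher/k3s:v1.29.0-k3s1

init:
  manifests: |-
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: staging-app
    # ... followed by a different Deployment and Service for the staging workload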

Step 3: vCluster Helm Installation

As we have the custom values file for the installation ready, we can proceed with the deployment of the virtual clusters.

$ kubectl create namespace dev

$ helm upgrade --install vcluster-dev vcluster --namespace dev --values ~/demo/vcluster/multi-tenant/values/custom-values-dev.yaml --repo https://charts.loft.sh --repository-config=''

Release "vcluster-dev" does not exist. Installing it now.
NAME: vcluster-dev
LAST DEPLOYED: Mon Apr 29 10:24:30 2024
NAMESPACE: dev
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing vcluster.

Your vcluster is named vcluster-dev in namespace dev.

To connect to the vcluster, use vcluster CLI (https://www.vcluster.com/docs/getting-started/setup):
$ vcluster connect vcluster-dev -n dev
$ vcluster connect vcluster-dev -n dev -- kubectl get ns


For more information, please take a look at the vcluster docs at https://www.vcluster.com/docs

Validation

$ helm list -n dev
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
vcluster-dev dev 1 2024-04-29 10:24:30.052973137 +0000 UTC deployed vcluster-0.19.5 0.19.5

$ kubectl get pods,svc,secret -n dev
NAME READY STATUS RESTARTS AGE
pod/nginx-dev-7c79c4bf97-6ln7q-x-nginx-app-x-vcluster-dev 1/1 Running 0 3m2s
pod/vcluster-dev-0 1/1 Running 0 3m26s
pod/coredns-68bdd584b4-qrv2z-x-kube-system-x-vcluster-dev 1/1 Running 0 3m2s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/vcluster-loadbalancer LoadBalancer 10.43.141.220 x.x.x.x. 443:31043/TCP 18m
service/vcluster-dev-headless ClusterIP None <none> 443/TCP 3m26s
service/vcluster-dev ClusterIP 10.43.169.7 <none> 443/TCP,10250/TCP 3m26s
service/vcluster-dev-node-k3s-demo01-49d0-a793bf-node-pool-1f89-nwlte ClusterIP 10.43.240.210 <none> 10250/TCP 3m2s
service/vcluster-dev-node-k3s-demo01-49d0-a793bf-node-pool-1f89-wib2n ClusterIP 10.43.203.185 <none> 10250/TCP 3m2s
service/kube-dns-x-kube-system-x-vcluster-dev ClusterIP 10.43.103.34 <none> 53/UDP,53/TCP,9153/TCP 3m2s
service/nginx-dev-x-nginx-app-x-vcluster-dev ClusterIP 10.43.169.94 <none> 80/TCP 3m2s

NAME TYPE DATA AGE
secret/vc-k3s-vcluster-dev Opaque 1 17m
secret/sh.helm.release.v1.vcluster-dev.v1 helm.sh/release.v1 1 3m26s
secret/vc-vcluster-dev Opaque 4 3m2s

Note: The virtual cluster runs its own API server and control plane, but low-level resources such as Pods, Services and Ingresses are synced to the parent cluster (the Civo cluster) and scheduled there. This is why they appear above with names suffixed by -x-<namespace>-x-vcluster-dev.

Step 4: Retrieve vCluster Kubeconfig

Once the virtual cluster is installed, we can retrieve the kubeconfig by decoding the config key of the secret named vc-vcluster-dev.

$ kubectl get secret vc-vcluster-dev -n dev --template={{.data.config}} | base64 -d > ~/demo/vcluster/multi-tenant/kubeconfig/vcluster-dev.yaml
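Note: Because we expose the virtual cluster through the LoadBalancer service, the server field in the exported kubeconfig may still point to a cluster-internal or localhost address. If so, point it to the external IP we passed via --tls-san, for example with a one-liner like the following (the IP is a placeholder):

$ sed -i 's|server: https://.*|server: https://<LoadBalancer IP>:443|' ~/demo/vcluster/multi-tenant/kubeconfig/vcluster-dev.yaml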

Step 5: Interact with vCluster

Once the kubeconfig is available, we can export the KUBECONFIG variable and start interacting with the virtual cluster. Follow the commands below for validation and further interaction.

$ export KUBECONFIG=~/demo/vcluster/multi-tenant/kubeconfig/vcluster-dev.yaml

$ kubectl get nodes,ns
NAME STATUS ROLES AGE VERSION
node/k3s-demo01-49d0-a793bf-node-pool-1f89-nwlte Ready <none> 11m v1.29.0+k3s1
node/k3s-demo01-49d0-a793bf-node-pool-1f89-wib2n Ready <none> 11m v1.29.0+k3s1

NAME STATUS AGE
namespace/kube-system Active 21m
namespace/kube-public Active 21m
namespace/kube-node-lease Active 21m
namespace/default Active 21m
namespace/nginx-app Active 21m

$ kubectl get pods,svc -n nginx-app
NAME READY STATUS RESTARTS AGE
pod/nginx-dev-7c79c4bf97-6ln7q 1/1 Running 0 8m22s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-dev ClusterIP 10.43.169.94 <none> 80/TCP 7m32s

Repeat steps 3 to 5 to create the virtual cluster with the name vcluster-staging.
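For reference, the staging installation is symmetrical to the dev one; a sketch, assuming the staging LoadBalancer service and custom-values-staging.yaml have already been prepared:

$ kubectl create namespace staging

$ helm upgrade --install vcluster-staging vcluster --namespace staging --values ~/demo/vcluster/multi-tenant/values/custom-values-staging.yaml --repo https://charts.loft.sh --repository-config=''

$ kubectl get secret vc-vcluster-staging -n staging --template={{.data.config}} | base64 -d > ~/demo/vcluster/multi-tenant/kubeconfig/vcluster-staging.yaml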

Conclusion

Admittedly, deploying and interacting with virtual clusters using vCluster was a positive user experience. In this post, we used Helm and kubectl exclusively, which makes it seamless to integrate with a Kubernetes add-on controller or a pipeline for automation.

In the upcoming blog post, we will describe how Sveltos can help platform administrators manage Role-Based Access Control (RBAC) in a multitenant environment in an easy, painless and scalable manner.

That’s a wrap! 🎉 Thanks for reading! Stay tuned for more exciting updates!
