How to manage multitenancy on Kubernetes?

Vaibhav Rajput
Published in Geek Culture · 5 min read · Sep 13, 2021

How do you manage Kubernetes clusters in a sizable organization? Do you keep separate clusters for all teams? For all environments? All its amazing capabilities aside, Kubernetes comes with management overhead once you start growing with it. As your infrastructure grows, you will have to find efficient ways to manage resources, users, and access.

The right approach?

Managing separate clusters per team and per environment would not be the most efficient way. You would need more professionals to manage these clusters, bear the cost of all those master nodes, and depend on administrators to deploy and set up clusters for every team. This calls for central management, or even a central cluster!

Issues with a central cluster?

There are two major complications here:
- Isolation: a project’s resources should not hinder another project’s.
- Access: a team should have access to only its own resources.

One single solution

Loft can help solve both of these issues cost-effectively and efficiently. In the next few minutes, we’ll see how you can create isolation within the Kubernetes cluster (we are talking about more than just namespaces) and have a central place to configure it. Once the admins are done with the configuration, developers are equipped with a self-service portal to provision their clusters, manage their apps, and much more.

The setup

First, we need our central cluster. For this demo, I’m using a minikube cluster that I have deployed on my local machine.
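If you want to follow along with a throwaway cluster of your own, and assuming minikube and kubectl are already installed, spinning one up looks like this:

```shell
# Start a local single-node Kubernetes cluster for the demo
minikube start

# Confirm the cluster is reachable before installing anything on it
kubectl cluster-info
kubectl get nodes
```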

Next, we need to install the Loft CLI. You can find the steps for your OS in the quickstart guide. For me, this is how it goes:

curl -s -L "https://github.com/loft-sh/loft/releases/latest" | sed -nE 's!.*"([^"]*loft-darwin-amd64)".*!https://github.com\1!p' | xargs -n 1 curl -L -o loft && chmod +x loft;sudo mv loft /usr/local/bin;

Once installed, verify the installation by checking that the binary responds.
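A quick way to do that is to ask the CLI for its version (the exact output format varies by release):

```shell
# If the binary is on your PATH, this prints the installed CLI version
loft --version
```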

Now we’re all set to deploy loft. Let’s begin by running

loft start

Make sure that your Helm version is >= 3.2, since the command running in the background uses the --create-namespace flag with helm upgrade.
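As a quick guard, you can check this locally before running loft start. Here is a minimal sketch; the `version_ge` helper and the 3.2.0 floor are my own illustration, not part of Loft:

```shell
#!/bin/sh
# Return success (0) if version $1 >= version $2, comparing with sort -V
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

required="3.2.0"
# Extract the bare semver from helm, e.g. "v3.6.3" -> "3.6.3"
current="$(helm version --template '{{.Version}}' 2>/dev/null | sed 's/^v//')"

if version_ge "$current" "$required"; then
    echo "Helm $current is new enough"
else
    echo "Helm >= $required required (found: ${current:-not installed})" >&2
fi
```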

Since I’m running minikube, a prompt appeared, to which I selected Yes. I then entered my email ID at the next prompt: “Enter an email address for your admin user”.

Once the installation is complete, you will see the following logs:

########################## LOGIN ############################

Username: admin
Password: 72258f0b-2e60-4441-8981-4809144df1b5

Login via UI: https://localhost:9898
Login via CLI: loft login --insecure https://localhost:9898

!!! You must accept the untrusted certificate in your browser !!!

#################################################################

Loft was successfully installed and port-forwarding has been started.
If you stop this command, run 'loft start' again to restart port-forwarding.
Thanks for using loft!

Let’s take a walk

After opening https://localhost:9898 in the browser and getting past the “Your connection is not private” warning, you will see a page where you can use the admin credentials to log in.

Then fill in a quick form about your profile, putting in your name, role, team size, and organization. Once done, click Finish and you’re in!

Bringing all clusters together

Our first step will be bringing all the different clusters under Loft’s management. To do this, go to the Cluster tab on the left.
Here you will see a cluster already present, named loft-cluster, which is the cluster on which you just deployed Loft.

Click the Connect Cluster button in the top right corner and put in a name and the kubeconfig for your cluster.

Loft will then install Kiosk into your cluster, which is a lightweight, pluggable, and customizable multi-tenancy solution.

Apart from Kiosk, once you click on a cluster, the UI gives you a bonus feature: a very convenient single-click installation option for many other apps.

Creating virtual clusters

Now that we have all our multi-tenant clusters ready under one management, it is time to create the isolation.

The first level of isolation is done through virtual clusters (vClusters). These virtual clusters are actually K3s clusters running on top of your Kubernetes cluster. To learn more about K3s, take a look at my earlier blog.

For this, go to the vCluster option in the left panel and click the Create vCluster button. Choose a cluster, give it a name, and you’re ready to go!
This will also create a namespace in your cluster, which you can view in the Spaces section of the UI.
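The UI flow above is what I used, but the Loft CLI can create vClusters too. As a sketch, with team-a as a placeholder name (the exact flags can differ between Loft versions, so consult the CLI help first):

```shell
# List the available options for your CLI version
loft create vcluster --help

# Create a virtual cluster named "team-a" (placeholder name)
loft create vcluster team-a
```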

Connecting to the vCluster

Once your vCluster is ready, you can connect to it by clicking the connect option and copying the stated command.

The command will look something like this:

loft use vcluster <vcluster-name> --cluster <cluster-name> --space <space-name>

Now, go to your terminal and run:

loft login <loft-address> --username <username> --access-key <access-key>

In my case, the loft-address was https://localhost:9898.
The username and access key depend on the user you are logging in as. Users can be created in the Users section of the left pane, and once a user is logged in, they can create their access key in the Profile section of the left pane.

NOTE: You might need to use the --insecure flag in the login command, since the certificates aren’t configured.

Now run the loft use vcluster command you got earlier and you’re in. From here, you can run kubectl commands within the vCluster.
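Since a vCluster has its own K3s API server, ordinary kubectl commands are now scoped to it rather than to the host cluster. A quick way to see the isolation (the nginx deployment here is just an illustrative workload):

```shell
# Inside the vCluster: only its own namespaces are visible,
# not those of the host cluster or of other tenants
kubectl get namespaces

# Deploy a workload the way a tenant team would
kubectl create deployment nginx --image=nginx
kubectl get pods

# Switch kubectl back to the host cluster when you're done
loft use cluster loft-cluster
```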

Parting note

Loft is a really handy solution for managing multiple teams running projects on a single cluster, or even across clusters from multiple distributions. I covered the initial setup here, but there’s much more to this tool. You can put clusters to sleep on a schedule to save costs, limit the quotas assigned to virtual clusters, share secrets between clusters, and much more.

