Kubernetes Cluster Multi-Tenancy

Rui Grafino
Marionete
Apr 5, 2021

Multi-tenancy refers to a mode of operation in which more than one instance of software operates in the same shared environment.

In Kubernetes this means that a single cluster can serve multiple tenants: customers, users, teams within a company, or any other relevant grouping. For each tenant, the experience should feel as if they have their own cluster in which to work and deploy workloads.

The industry best practice for handling multiple tenants is to assign each tenant a separate namespace.
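As a minimal sketch, a tenant namespace is just a regular Namespace object, typically labelled so that policies and cost reports can select it later. The tenant name `team-a` and its label below are purely illustrative:

```yaml
# Hypothetical example: a dedicated namespace for a tenant called "team-a".
# The name and label are illustrative; adapt them to your own naming scheme.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    tenant: team-a
```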

Why is Multi-Tenancy important?

We must ensure that each tenant or namespace is sufficiently isolated, and there are several reasons to do so:

· Competition for resources.

· Cost management.

· Workload security.

· Market regulations.

· Privacy and data protection.

· Security and vulnerabilities.

Soft and Hard Multi-tenancy

Soft multi-tenancy assumes users are non-malicious and focuses on minimising accidents and managing incidents when they happen.

A good example is a development cluster where one or several teams within the same company share the cluster for their daily tasks, each in a separate namespace. They sometimes collaborate, so there is no hard need to impose full isolation. This model is usually used for a company's internal customers in non-production scenarios.

Hard multi-tenancy assumes users may be malicious and potentially dangerous to each other; therefore, complete isolation between namespaces is required.

An example is a company providing a service to different customers, each with their own privacy and security requirements or workloads. This model is usually used for a company's external customers and for production workloads.

Single-Tenant vs Shared Namespaces

Using a multi-tenancy operating model does not mean that a user, customer, or team is strictly limited to one namespace, or that a namespace cannot be shared by more than one tenant. A tenant can be assigned more than one namespace.

There are multiple scenarios where different levels of isolation are required:

- One tenant with one isolated namespace.

- One tenant with one or more isolated namespaces.

- Multiple tenants sharing a namespace.

A single-tenant namespace is where you should run services or applications that do not need to be accessed from other namespaces.

A Shared Namespace is a block of logical isolation where you should run services or applications that need to be accessed by services or users from other namespaces.

Multi-Tenancy objects to enforce isolation

To enforce isolation, and as a best practice, you should use a few Kubernetes primitives and configure them to control who can access each namespace, which network traffic is allowed within and between namespaces, and how many resources (CPU, memory, or storage) a namespace can consume. A minimal sketch of each follows the list below.

  • RBAC — Role-Based Access Control, defining who can do what in each namespace
  • Network Policies — isolate traffic within and between namespaces
  • Resource Quotas — limit the resources each namespace can consume
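The following is an illustrative sketch of these three primitives applied to the hypothetical `team-a` namespace; the names, rules, and quota values are assumptions to be adapted to your own cluster:

```yaml
# RBAC: a Role scoped to the tenant's namespace, bound to an example group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-edit
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers   # example group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-edit
  apiGroup: rbac.authorization.k8s.io
---
# NetworkPolicy: only allow ingress traffic from pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: team-a
spec:
  podSelector: {}           # applies to every pod in the namespace
  ingress:
    - from:
        - podSelector: {}   # same-namespace pods only
---
# ResourceQuota: cap the CPU, memory and storage the namespace can request.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.storage: 100Gi
```

Note that the empty `podSelector` in the NetworkPolicy makes it apply to every pod in the namespace, so any cross-namespace traffic a shared service needs must be allowed explicitly with an additional rule.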

Hierarchical namespaces

The Kubernetes Special Interest Group for multi-tenancy has several projects incubating, and one of the most interesting and promising is the ability for a namespace owner to split their resources into smaller chunks.

This capability is provided by the Hierarchical Namespace Controller (HNC).

Hierarchical namespaces make it easier to share your cluster by making namespaces more powerful. For example, you can create additional namespaces under your team’s namespace, even if you don’t have cluster-level permission to create namespaces, and easily apply policies like RBAC and Network Policies across all namespaces in your team.
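As an illustrative sketch (assuming the HNC is installed in the cluster; the API version shown varies between HNC releases), a team member creates a subnamespace by adding a SubnamespaceAnchor to a parent namespace they already control:

```yaml
# Hypothetical subnamespace "team-a-experiments" created under "team-a".
# Requires the Hierarchical Namespace Controller to be installed.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: team-a-experiments   # the child namespace that will be created
  namespace: team-a          # the parent namespace the requester owns
```

HNC then creates the child namespace and propagates objects such as RoleBindings (and, depending on configuration, other policy objects) from the parent, which is how policies cascade down the hierarchy.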

Other relevant best practices in a multi-tenancy context

· Always label your workloads and namespaces.

· Monitor and set resource limits per workload and namespace (see the LimitRange sketch after this list).

· Prevent usage of hostPath volumes.

· Apply the principle of least privilege.

· Define clear network policies.
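As a small illustrative example of the labelling and resource-limit points above (the values are assumptions), a LimitRange gives every container in the tenant namespace sensible defaults even when a workload forgets to declare its own requests and limits:

```yaml
# Hypothetical defaults for the "team-a" namespace; tune the values to your needs.
apiVersion: v1
kind: LimitRange
metadata:
  name: team-a-defaults
  namespace: team-a
  labels:
    tenant: team-a
spec:
  limits:
    - type: Container
      defaultRequest:        # applied when a container sets no request
        cpu: 100m
        memory: 128Mi
      default:               # applied when a container sets no limit
        cpu: 500m
        memory: 512Mi
```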
