Best Practices and Considerations for Multi-Tenant SaaS Application Using AWS EKS


Today, most organizations, large or small, host their SaaS applications on the cloud using a multi-tenant architecture. There are multiple reasons for this, but the simplest and most straightforward ones are cost and scalability. In a multi-tenant architecture, one instance of a software application is shared by multiple tenants (clients). These tenants share resources such as databases, web servers, and compute, which makes a multi-tenant architecture cost-efficient without sacrificing scalability.

Amazon EKS (Elastic Kubernetes Service) is one of the most popular container orchestration platforms offered by AWS and is widely used to host multi-tenant SaaS applications. However, while adopting a multi-tenancy framework, it is also important to be aware of the challenges that arise from sharing cluster resources.

In this article, let us delve into the best practices and considerations for multi-tenant SaaS applications on Amazon EKS.

Practice 1: Separate Namespaces for Each Tenant (Compute Isolation)

Having separate namespaces remains an essential consideration when deploying a multi-tenant SaaS application, as namespaces divide a single cluster's resources across multiple clients. In a multi-tenant architecture, namespaces are the primary unit of isolation in Kubernetes. Amazon EKS lets you create a separate namespace for each tenant running the SaaS application, isolating each tenant and its environment within the same Kubernetes cluster. This gives you a degree of tenant separation and data privacy without having to create a different cluster per tenant, which substantially reduces compute resources and AWS hosting costs.

Source — https://aws.amazon.com/
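
As a quick sketch of what this looks like in practice (the tenant1 name and the nsname label are illustrative conventions, not something EKS mandates), a per-tenant namespace can be created with a minimal manifest; the label comes in handy later for namespace-based network policies:

apiVersion: v1
kind: Namespace
metadata:
  name: tenant1
  labels:
    nsname: tenant1

Applying this with kubectl apply -f (or simply running kubectl create namespace tenant1) gives each tenant its own scope for the resources described in the practices below.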

Practice 2: Setting ResourceQuota on Resource Consumption

A multi-tenant SaaS application serves several tenants, each of them accessing the same Kubernetes cluster resources concurrently. There will often be scenarios where a particular tenant consumes a disproportionate share of resources and exhausts the cluster's capacity on its own, leaving nothing for the other tenants. To avoid such capacity starvation, the ResourceQuota object comes to the rescue: with it you can cap the total resources that the containers (hosting the SaaS application) in a tenant's namespace can consume.

Here is an example:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-setting
  namespace: tenant1
spec:
  hard:
    requests.cpu: "2"
    requests.memory: "1Gi"
    limits.cpu: "4"
    limits.memory: "2Gi"

To put the above in context, once this quota is applied to the tenant1 namespace, the combined CPU requests of all pods in that namespace cannot exceed 2 CPUs, and their combined CPU limits cannot exceed 4 CPUs. Likewise, total memory requests are capped at 1Gi and total memory limits at 2Gi. This caps the namespace's resource usage and ensures that a single tenant does not end up consuming all of the cluster's resources.
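
As a quick usage sketch (the file name is illustrative), you can apply the quota and then check how much of it the tenant is currently consuming:

kubectl apply -f resource-quota.yaml
kubectl describe resourcequota resource-quota-setting -n tenant1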

Practice 3: Network Isolation using Network Policies

By default, Kubernetes allows pods in different namespaces to talk to each other. If your SaaS application runs in a multi-tenant architecture, you will want to prevent that in order to isolate the namespaces from one another. To do so, you can apply tenant-isolation network policies and network segmentation on Amazon EKS. As a best practice, you can install Calico on Amazon EKS as the policy engine and assign network policies to pods as shown below.

For reference, the following policy allows traffic only from within the same namespace while restricting all other traffic. Pods with the label app: api in the tenant-a namespace will only send and receive traffic within tenant-a; communication between tenant-a and the other tenants, in either direction, is denied, which achieves network isolation.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace
  namespace: tenant-a
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          nsname: tenant-a
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          nsname: tenant-a
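
Note that the namespaceSelector above matches namespaces by their nsname label, so this only works if the tenant-a namespace actually carries that label (an assumed labeling convention, not something Kubernetes adds for you):

kubectl label namespace tenant-a nsname=tenant-a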

Practice 4: Storage Isolation using PersistentVolume and PersistentVolumeClaim

As opposed to a single-tenant setup, a multi-tenant framework requires a different approach to managing application storage. On Amazon EKS, you can assign and manage storage for different tenants using PersistentVolumes (PV), while a tenant's request for storage is expressed as a PersistentVolumeClaim (PVC). Since a PVC is a namespaced resource, it is easy to isolate storage between tenants.

In the example below for tenant1, we have configured a PVC with the ReadWriteOnce access mode and 2Gi of storage.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-storage
  namespace: tenant1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
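
A pod in the tenant1 namespace can then consume this claim through a volume; the pod, volume, and mount path names below are just placeholders for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
  namespace: tenant1
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: tenant1-data
      mountPath: /data
  volumes:
  - name: tenant1-data
    persistentVolumeClaim:
      claimName: pvc-storage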

Practice 5: AWS IAM Integration with Amazon EKS for Setting Up RBAC

Just like any other AWS service, EKS integrates with AWS IAM to administer Role-Based Access Control (RBAC) on a Kubernetes cluster. Through the AWS IAM authenticator, tenants authenticate to the cluster using their IAM identities. To use IAM in a multi-tenant setup, you add the tenant's (user's) IAM role to the aws-auth ConfigMap so it can authenticate to the cluster. Once authentication by AWS IAM succeeds, the Role (namespaced resource) and/or ClusterRole (non-namespaced resource) defined for that tenant's namespace governs what the tenant is allowed to do. By provisioning Role and ClusterRole policies per tenant, you can adopt a hardened security posture for a multi-tenant SaaS application.
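
As a rough sketch of how the pieces fit together (the account ID, role name, and group name below are placeholders, and in practice you edit the existing aws-auth ConfigMap in kube-system rather than replacing it), the tenant's IAM role is mapped to a Kubernetes group in aws-auth, and that group is then bound to a namespaced Role in the tenant's namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/tenant1-access-role
      username: tenant1-user
      groups:
        - tenant1-group
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant1-role
  namespace: tenant1
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant1-rolebinding
  namespace: tenant1
subjects:
- kind: Group
  name: tenant1-group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant1-role
  apiGroup: rbac.authorization.k8s.io

With this in place, anyone assuming the tenant1-access-role IAM role can only operate on the listed resources inside the tenant1 namespace.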

Practice 6: Manage Tenant Placement on Kubernetes nodes

Like upstream Kubernetes, Amazon EKS provides node affinity, pod affinity, and taints with tolerations for managing tenant placement on Kubernetes nodes. Using node affinity, you decide on which nodes a particular tenant's pods run. Using pod affinity (or anti-affinity), you decide whether tenant 1's and tenant 2's pods should land on the same node or on different nodes. Taints and tolerations let you reserve nodes for specific tenants.

With the command below, node1 is tainted so that no pod is scheduled on it unless the pod carries a matching toleration, i.e. the key client with the value tenant1. With this method, you can reserve a particular node for a particular tenant's pods.

kubectl taint nodes node1 client=tenant1:NoSchedule

This is what a pod configuration with the matching toleration looks like:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    env: prod
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "client"
    operator: "Equal"
    value: "tenant1"
    effect: "NoSchedule"
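
Keep in mind that a toleration only allows the pod onto the tainted node; it does not force it there. To actually dedicate node1 to tenant1, you would typically also label the node and add a matching nodeSelector (or node affinity rule) such as client: tenant1 to the pod spec, for example:

kubectl label nodes node1 client=tenant1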

Conclusion

That was all about the best practices and considerations for running a multi-tenant SaaS application on Amazon EKS. A multi-tenant SaaS application on EKS gives you multiple options for compute, network, and storage isolation while keeping your workloads secure. Go ahead and try out these practices and let us know your experience.

Alternatively, if you have any additional best practices to share, do let us know.


appfleet is a cloud platform offering edge compute for containers and web applications.