Serverless architecture for multi-tenant SaaS
How to break the tradeoff between strong data separation and low cost per tenant
Architectures for multi-tenant SaaS aim for two conflicting goals:
- Keep costs per tenant low. This includes the operating costs for the resources, e.g. the AWS bill, and the costs for managing the setup.
- Provide separation between tenants. This includes strong data separation as well as flexibility for tenant-specific configuration like SSO, tenant-specific password policies, and the like.
Costs are typically kept low by sharing as many resources as possible across tenants. Separation requires exactly the opposite: avoid sharing resources between tenants.
For us, this tradeoff came to a head when we wanted to go freemium while enterprise customers brought more and more demanding requests for data separation and tenant-specific configuration.
Pre-serverless architecture
We had the following architecture in place when the need for freemium came up:
- Data was stored in a Kafka cluster, which also acted as the messaging backbone
- Business functionality was provided by a fleet of microservices hosted on a Kubernetes cluster
- Various AWS services provided adjacent functionality, e.g. files were stored in S3 and emails were sent via SES
Running Kubernetes and Kafka clusters added so much complexity and cost that it wasn’t feasible to run tenant-specific clusters. Instead, we ran one shared Kubernetes and one shared Kafka cluster for all tenants.
Data was separated between tenants solely via encryption. Although this worked, it turned out to be highly complex (and thereby error-prone). Certain things proved particularly challenging, for example, preventing tenant-specific data from mixing in logs and backups of shared systems.
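To make the idea of encryption-based separation concrete, here is a minimal sketch of deriving a distinct data key per tenant from a single master secret, so records of different tenants in a shared cluster are never encrypted under the same key. All names are hypothetical, and a production setup would manage per-tenant keys in KMS rather than deriving them like this:

```python
import hashlib

# Placeholder; in reality this would come from a secrets manager, not source code.
MASTER_SECRET = b"master-secret-from-a-vault"

def tenant_data_key(tenant_id: str) -> bytes:
    """Derive a 32-byte data key scoped to a single tenant.

    The tenant id acts as the salt, so each tenant gets a stable,
    unique key. The actual record encryption is omitted here.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        MASTER_SECRET,
        tenant_id.encode(),
        iterations=100_000,
    )

# Different tenants get different keys; the same tenant always gets the same key.
assert tenant_data_key("tenant-a") != tenant_data_key("tenant-b")
assert tenant_data_key("tenant-a") == tenant_data_key("tenant-a")
```

Even with clean key separation like this, the hard part remains everything around the data path: shared logs, shared backups, and shared operational tooling all have to honor the same boundary.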
Serverless to the rescue
Sooner or later we had to face the facts: Keeping the shared Kubernetes/Kafka clusters would make it increasingly hard to serve enterprise customers. Running separate Kubernetes/Kafka clusters would immediately kill our freemium plans.
Time for a fresh start. We had to split the tenant resources AND dramatically reduce costs per tenant.
We achieved this by splitting the tenants and by replacing the data and business functionality layer with serverless technologies:
- We execute business functionality in Lambda functions instead of Kubernetes microservices.
- We store data in DynamoDB instead of Kafka.
All resources for a — now split — tenant are placed in a dedicated AWS account, offering strong tenant separation. Each AWS account therefore has its own instances of DynamoDB tables and Lambda functions.
To keep the multitude of AWS accounts manageable, we added centralized deployment infrastructure, which provisions the tenant-specific environments.
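The centralized deployment step can be sketched as a plan builder that fans the same stack definition out to every tenant account. Account ids, tenant names, and the stack naming scheme below are made up for illustration; a real implementation would assume a cross-account role and deploy via CloudFormation or CDK instead of just returning a plan:

```python
from dataclasses import dataclass

@dataclass
class Tenant:
    name: str
    account_id: str  # dedicated AWS account for this tenant

def build_deploy_plan(tenants: list[Tenant], version: str) -> list[dict]:
    """Return one stack deployment per tenant account.

    Every account receives its own copy of the same stack
    (Lambda functions + DynamoDB tables), pinned to one version.
    """
    return [
        {
            "account_id": t.account_id,
            "stack": f"saas-backend-{t.name}",  # hypothetical naming scheme
            "version": version,
        }
        for t in tenants
    ]

tenants = [Tenant("acme", "111111111111"), Tenant("globex", "222222222222")]
plan = build_deploy_plan(tenants, version="2024-05-01")
assert len(plan) == 2
assert plan[0]["account_id"] == "111111111111"
```

The key design choice is that deployment is the only centralized piece: once provisioned, each tenant environment runs independently of all others.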
Enable freemium with partitions
Moving each tenant into a separate AWS account solved the first part of the puzzle: We can offer strong tenant separation. Yet, the cost per tenant is still way above $0, making it unsuitable for a freemium model.
We therefore added the concept of a “partition”. Each partition is deployed into a dedicated AWS account and can hold multiple tenants. This means we can have many cheap tenants within one partition, which are strongly separated from all tenants in other partitions.
We place all our freemium tenants in one partition, bringing their marginal cost down to ~$0. Enterprise tenants are placed in dedicated partitions, giving them strong separation.
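The routing this implies can be sketched as a small lookup: each tenant maps to a partition, each partition maps to an AWS account, and tenants sharing the pooled partition are still scoped inside DynamoDB by prefixing the partition key with their tenant id. All ids and names here are hypothetical:

```python
# Hypothetical mapping: each partition is one AWS account.
PARTITIONS = {
    "freemium-pool": "333333333333",    # shared account for all freemium tenants
    "enterprise-acme": "444444444444",  # dedicated account for one enterprise tenant
}

TENANT_PARTITION = {
    "small-startup": "freemium-pool",
    "hobbyist": "freemium-pool",
    "acme": "enterprise-acme",
}

def partition_account(tenant_id: str) -> str:
    """Resolve the AWS account that serves a given tenant."""
    return PARTITIONS[TENANT_PARTITION[tenant_id]]

def dynamodb_pk(tenant_id: str, entity_id: str) -> str:
    """Within a shared partition, items are still scoped per tenant
    by prefixing the DynamoDB partition key with the tenant id."""
    return f"{tenant_id}#{entity_id}"

# Freemium tenants share an account; enterprise tenants don't.
assert partition_account("hobbyist") == partition_account("small-startup")
assert partition_account("acme") != partition_account("hobbyist")
assert dynamodb_pk("hobbyist", "order-42") == "hobbyist#order-42"
```

Moving a tenant from the pool to a dedicated partition then becomes a routing change plus a data migration, rather than an architectural rewrite.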
Results
The change of architecture achieved our two main goals: We can offer much stronger separation to enterprise customers, and the marginal cost per freemium tenant is very close to $0.
We also reaped some unexpected benefits:
- Our overall AWS bill dropped by 91% because we no longer pay the high base cost of running — underutilized — Kubernetes and Kafka clusters.
- Our system uptime improved because most operational work is now “outsourced” to AWS.
Happy coding!