Spring Boot Multi-tenant Architecture Overview

Kostiantyn Dementiev
8 min read · Dec 26, 2022

Developing and maintaining multi-tenant Spring Boot applications can be a source of serious trouble for many developers.

This article walks through creating a multi-tenant Spring Boot project (using the database-per-tenant approach) with the ability to add new databases at runtime.

As a demonstration of a project with high security requirements, a medical lab simulation will be created. Technology stack: Spring Boot with Spring Data JPA and Spring Data JDBC; for test purposes, Testcontainers, Database Rider, and p6spy will be used.

Table of Contents

  • Multi-tenancy concept overview
    - Lexicon
  • Multi-tenancy implementation approaches
    - Shared schema
    - Schema-per-tenant
    - Database-per-tenant
  • General overview of example project
    - Project overview
    - Database schemas
    - Infrastructure setup
  • Database workflow
    - Dynamic database creation
    - Dynamic database choosing
      - Spring Data JPA
      - Spring Data JDBC
  • Integration tests
  • Summary

Multi-tenancy concept overview

The multi-tenancy principle allows several users to share computing, networking, and storage resources without ever having access to one another’s data. Each client (referred to as a tenant) can receive a customized version of a multi-tenant application, but the overall architecture and core features remain the same. Multi-tenancy is a tactic that Software-as-a-Service (SaaS) providers frequently employ.

Lexicon

  • Tenant — a client served by one or more application instances
  • Single-tenant application — a solution where each application instance serves only one tenant
  • Multi-tenant application — a solution where application instances serve two or more tenants

Multi-tenancy implementation approaches

In general, there are three commonly used approaches to implementing a multi-tenant architecture:

  • Shared schema
  • Schema-per-tenant
  • Database-per-tenant

Shared schema

This is the most popular approach. With it, data from all tenants is stored in one common schema. To separate data belonging to different tenants, tables contain a column that identifies the tenant:
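As an illustration (the table and column names here are mine, not taken from the repository), a shared-schema table could look like this, with every query filtering on the discriminator column:

```sql
-- Hypothetical shared-schema table: every row carries the tenant it belongs to
CREATE TABLE researches (
    id          bigint PRIMARY KEY,
    tenant_id   bigint NOT NULL,  -- discriminator column identifying the tenant
    name        varchar(255),
    description varchar(255)
);

-- Every query must then filter by the current tenant, e.g.:
-- SELECT * FROM researches WHERE tenant_id = 42;
```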

Advantages:

  • Best development simplicity (this approach is well known to most developers and is used in the majority of applications)
  • Low infrastructure cost (we pay for only one database instance)

Disadvantages:

  • Worse database performance (because data from all tenants is stored in the same tables)

Schema-per-tenant

Using this approach, we store each tenant’s data in a separate schema on a shared database instance:

Advantages:

  • Better database performance
  • Better data separation

Disadvantages:

  • More complicated in development

Database-per-tenant

This approach provides the best separation of tenant data — we can store it in different databases or even on separate database instances:

Advantages:

  • Better database performance
  • Best data separation (ideal when security requirements are strict)

Disadvantages:

  • The most complicated in development
  • When using separate database instances — the highest infrastructure cost

General overview of example project

Project overview

For a multi-tenancy demonstration, I’m going to build a medical lab simulation, where information about research for different companies will be stored in separate databases (due to high security requirements).

The application is going to be multi-module; 5 modules will be created:

  • auth — all the logic needed for authentication and authorization flows
  • tenant-management — module for operating tenants and their databases
  • lab — module for demo business logic
  • commons — module for parts of the project that are needed by several other modules
  • application — the main module of the application; general configuration is stored in it

Database schemas

For storing info about tenants and users, I’ll use the main database instance with the following schema:

And for storing each tenant’s data, databases with a fairly simple schema will be used:

Database workflow

Dynamic database creation

To provide the ability to create databases dynamically, we have to follow these steps:

  1. We will need some configuration, which is going to be stored in the application.yml file in the resources folder:
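As a sketch of what such configuration could contain (the property names below are illustrative, not necessarily the ones used in the repository):

```yaml
# Hypothetical application.yml fragment: coordinates of the main database
# and a URL template for tenant databases created at runtime
datasource:
  main:
    url: jdbc:postgresql://localhost:5432/main
    username: postgres
    password: postgres
  tenant:
    url-template: jdbc:postgresql://localhost:5432/%s
liquibase:
  main-changelog: classpath:changelog/main.yml
  tenant-changelog: classpath:changelog/tenant.yml
```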

2. For the actual database creation, we need to implement a TenantDao:
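One detail worth noting: PostgreSQL does not allow CREATE DATABASE to take bind parameters, so the statement has to be built from a validated database name. A minimal sketch of that part (the real TenantDao in the repository may differ):

```java
// Hypothetical sketch of the database-creation part of TenantDao.
// CREATE DATABASE cannot be parameterized, so the name is validated and
// concatenated; in the real service the resulting SQL would be executed
// through a JdbcTemplate pointed at the main datasource.
class TenantDao {

    // Allow only simple lowercase identifiers to prevent SQL injection
    private static final String DB_NAME_PATTERN = "[a-z_][a-z0-9_]*";

    static String createDatabaseSql(String dbName) {
        if (dbName == null || !dbName.matches(DB_NAME_PATTERN)) {
            throw new IllegalArgumentException("Illegal database name: " + dbName);
        }
        return "CREATE DATABASE " + dbName;
    }
}
```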

Dao here stands for “Data Access Object”.

3. After the database is created, we need to run migrations. For this purpose, let’s add a LiquibaseService:

Here we have specified the paths for the main and tenant database migrations and a couple of methods for running them.

Example of a migration:

databaseChangeLog:

  - preConditions:

  - changeSet:
      id: createTenantsTable
      author: konstde00
      changes:
        - createTable:
            columns:
              - column:
                  name: id
                  type: bigint
              - column:
                  name: name
                  type: varchar
              - column:
                  name: db_name
                  type: varchar
              - column:
                  name: user_name
                  type: varchar
              - column:
                  name: db_password
                  type: varchar
              - column:
                  name: creation_status
                  type: varchar
            schemaName: public
            tableName: tenants

  - changeSet:
      id: createTenantsIdSequence
      author: konstde00
      changes:
        - createSequence:
            dataType: bigint
            minValue: 2
            incrementBy: 1
            schemaName: public
            sequenceName: tenants_id_seq

and a simple example of a changelog file:

databaseChangeLog:

  - include:
      file: changelog/Tenants.yml
      relativeToChangelogFile: true
  - include:
      file: changelog/Users.yml
      relativeToChangelogFile: true
  - include:
      file: changelog/UserRoles.yml
      relativeToChangelogFile: true

  # Constraints

  - include:
      file: changelog/TenantsConstraints.yml
      relativeToChangelogFile: true
  - include:
      file: changelog/UsersConstraints.yml
      relativeToChangelogFile: true
  - include:
      file: changelog/UserRolesConstraints.yml
      relativeToChangelogFile: true

Also, for creating database connections, we need to implement a ConnectionService:
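The connection-related logic can be sketched like this (a simplified illustration; the real ConnectionService would feed the resulting URL and the tenant’s credentials into a pooled DataSource, HikariCP in Spring Boot’s case):

```java
// Illustrative sketch: build a per-tenant JDBC URL from connection settings.
// Host and port would come from the application.yml configuration; the
// database name comes from the newly created tenant.
class ConnectionService {

    static String buildJdbcUrl(String host, int port, String dbName) {
        return "jdbc:postgresql://" + host + ":" + port + "/" + dbName;
    }
}
```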

4. To create an admin user for our tenants, we have to implement UserDao, UserRepository and UserService:

5. To manage migrations of the main and tenant datasources, let’s add a DataSourceConfigService:

6. To provide the ability to choose between different datasources, we have to add a DataSourceRoutingService:

For this service, it is important to explain the concept of target and resolved data sources.

AbstractRoutingDataSource has 2 important properties:

@Nullable
private Map<Object, Object> targetDataSources;

and

@Nullable
private Map<Object, DataSource> resolvedDataSources;

Without the reflection API, we can set only the target datasources (using the setTargetDataSources method), but when we switch between different data sources, the resolvedDataSources map is used. To update this property, we can use the afterPropertiesSet method from AbstractRoutingDataSource:

@Override
public void afterPropertiesSet() {
    if (this.targetDataSources == null) {
        throw new IllegalArgumentException("Property 'targetDataSources' is required");
    }
    this.resolvedDataSources = CollectionUtils.newHashMap(this.targetDataSources.size());
    this.targetDataSources.forEach((key, value) -> {
        Object lookupKey = resolveSpecifiedLookupKey(key);
        DataSource dataSource = resolveSpecifiedDataSource(value);
        this.resolvedDataSources.put(lookupKey, dataSource);
    });
    if (this.defaultTargetDataSource != null) {
        this.resolvedDefaultDataSource = resolveSpecifiedDataSource(this.defaultTargetDataSource);
    }
}

In this method, we iterate through the target datasources and use them to update the resolved datasources. So, in our DataSourceRoutingService, to update the list of available datasources we only need to call these 2 methods, like this:

public void updateResolvedDataSources(Map<Object, Object> dataSources) {
    setTargetDataSources(dataSources);
    afterPropertiesSet();
}

7. Then, let’s add a TenantMapper:

8. Next, we have to add a TenantService. It will be used to perform the majority of operations with the “Tenant” entity:

Dynamic database choosing

  • Spring Data JPA

First, let’s add the following dependencies to the pom.xml file:

In my example app I use PostgreSQL, so this dependency needs to be added:

Implement DataSourceContextHolder:
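A common way to implement it (a minimal sketch; the class in the repository may differ in details) is a ThreadLocal that stores the current tenant’s datasource lookup key:

```java
// ThreadLocal-based holder for the current tenant's datasource lookup key.
// The routing datasource reads this key in determineCurrentLookupKey().
class DataSourceContextHolder {

    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void setTenantId(String tenantId) {
        CONTEXT.set(tenantId);
    }

    public static String getTenantId() {
        return CONTEXT.get();
    }

    // Always clear after the request to avoid leaking state
    // between pooled threads
    public static void clear() {
        CONTEXT.remove();
    }
}
```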

Then we need to add DataSourceRoutingService:

Next, create DataSourceConfig:

Then, we need to implement a TenantsRoutingFilter:
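The filter’s job is to extract the tenant identifier from each incoming request and put it into the DataSourceContextHolder before the request reaches the repository layer, then clear it afterwards. The resolution step can be sketched as a pure function (the header name and default key below are my assumptions, not taken from the repository):

```java
import java.util.Map;

// Illustrative tenant resolution: pick the tenant id from request headers,
// falling back to the main datasource key when the header is absent.
class TenantsRoutingFilter {

    static final String TENANT_HEADER = "X-TenantId"; // assumed header name
    static final String DEFAULT_TENANT = "main";      // assumed default key

    static String resolveTenantId(Map<String, String> headers) {
        return headers.getOrDefault(TENANT_HEADER, DEFAULT_TENANT);
    }
}
```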

Finally, let’s add a TenantController with basic CRUD operations on tenants:

  • Spring Data JDBC

The idea of using Spring Data JDBC here is pretty simple and comes from the Strategy pattern: we will maintain multiple Dao classes, each of which accesses data in the datasource of one particular tenant. We will also implement Dao holders, which will be used to manage those Dao classes, and use them every time we need access to the database of a particular tenant.

First, let’s add the following dependency:

Then implement AbstractDao, the parent of all other Dao classes:

Then create ResearchDao, the ‘concrete Dao’ class for operating on researches:

After that, we will need to create an AbstractDaoHolder and a ResearchDaoHolder, where methods for managing ResearchDao instances will be created:
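The holder idea can be sketched generically: the holder keeps one template (or Dao) per tenant key and creates missing ones whenever new datasources appear. Here is a simplified version, parameterized over the template type so the pattern is visible without the Spring JDBC classes (in the article’s project, T would be a NamedParameterJdbcTemplate built from the tenant’s DataSource):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Simplified sketch of the Dao-holder idea: one "template" per tenant key.
abstract class AbstractDaoHolder<S, T> {

    private final Map<Object, T> templates = new ConcurrentHashMap<>();
    private final Function<S, T> templateFactory;

    protected AbstractDaoHolder(Function<S, T> templateFactory) {
        this.templateFactory = templateFactory;
    }

    // Called whenever the set of datasources changes: create templates
    // for tenants we have not seen yet
    public void addNewTemplates(Map<Object, S> dataSources) {
        dataSources.forEach((key, ds) ->
                templates.computeIfAbsent(key, k -> templateFactory.apply(ds)));
    }

    public T getTemplate(Object tenantKey) {
        return templates.get(tenantKey);
    }
}
```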

Then we have to extend the DataSourceRoutingService: add a

Map<String, AbstractDaoHolder> daoHolders

field and a method

public void updateDaoHolders(Map<Object, DataSource> dataSources) {
    daoHolders.forEach((key, value) -> value.addNewTemplates(dataSources));
}

to update Dao holders, so now it looks like this:

This way, we’ll be able to use a Dao class for a new datasource as soon as it’s created.

Integration tests

In this section, I will show how to write convenient integration tests for the persistence layer using Spring Boot, Testcontainers, Database Rider, and a datasource proxy.

First, let’s add the required dependencies to our pom.xml file:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.testcontainers</groupId>
            <artifactId>testcontainers-bom</artifactId>
            <version>1.17.6</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.github.database-rider</groupId>
    <artifactId>rider-core</artifactId>
    <version>1.35.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.github.database-rider</groupId>
    <artifactId>rider-spring</artifactId>
    <version>1.35.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.github.gavlyukovskiy</groupId>
    <artifactId>p6spy-spring-boot-starter</artifactId>
    <version>1.6.2</version>
    <scope>test</scope>
</dependency>

First of all, we will create an AbstractApiTest class and hide most of the configuration there, so our tests will look as follows:

There is a great tool for managing datasets: Database Rider. It uses another library, DbUnit, as its main engine and makes configuration extremely easy. There are many configuration options depending on your environment, but in the case of JUnit 5 and Spring Boot, all you need to do is place the @DBRider annotation on your test, and that is it (for more information, see the official documentation; luckily, it is pretty good).

After that, you can place the @DataSet annotation on your class or test method and use a DbUnit dataset in your preferred format (YAML, XML, JSON, CSV, XLS, or even your own Java class). In our case, it will be YAML:

researches:
  - id: '0'
    name: First research name
    description: First research description

Moreover, we can specify the data that is expected to be in the database after the actions performed in the test. This can be done with the @ExpectedDataSet annotation.

If we don’t need to check some columns (for example, auto-generated ids or timestamps), we can list them in the ignoreCols property like this:

@ExpectedDataSet(
        value = {"datasets/create_research/expected.yaml"},
        ignoreCols = {"id"}
)

Next, let’s see the content of the base abstract class:

Now, let’s think about where we will get the database for testing. The best practice is to keep the test environment as similar to the production environment as possible. So we will use the same database as in production (PostgreSQL in our case).

Luckily, we have Docker, which can bring us almost any external dependency for testing. We will go further and use the Testcontainers library, which facilitates running Docker containers directly from our tests.

Testcontainers also provides nice wrappers for many popular products (including PostgreSQL, MySQL, and some other databases). Now we can create a DatabaseContainerInitializer as a custom Spring initializer:
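A typical initializer looks roughly like this (a sketch using the standard Testcontainers and Spring test APIs; the class in the repository may differ, and the image tag is my choice):

```java
import org.springframework.boot.test.util.TestPropertyValues;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.testcontainers.containers.PostgreSQLContainer;

// Starts a PostgreSQL container and exposes its URL and credentials as
// Spring properties before the application context is refreshed.
public class DatabaseContainerInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    private static final PostgreSQLContainer<?> POSTGRES =
            new PostgreSQLContainer<>("postgres:14-alpine");

    @Override
    public void initialize(ConfigurableApplicationContext context) {
        POSTGRES.start();
        TestPropertyValues.of(
                "spring.datasource.url=" + POSTGRES.getJdbcUrl(),
                "spring.datasource.username=" + POSTGRES.getUsername(),
                "spring.datasource.password=" + POSTGRES.getPassword()
        ).applyTo(context.getEnvironment());
    }
}
```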

After our container is up and running and the main database has been created, we need to create a test tenant database. For this purpose, let’s create a TestTenantConfig and implement SmartInitializingSingleton:

Here we have specified the flow of test tenant database creation and a tenant datasource bean, which is used in all tests via the @DBRider annotation:

@DBRider(dataSourceBeanName = "tenantDataSource")

Also, in some cases it can be extremely important to see which SQL queries were executed. For this purpose, we can use a datasource proxy (p6spy in our case) that will log queries nicely, like this:

Some-data-point  INFO 2702 --- [           main] p6spy                                    : #1672006512538 | took 2ms | statement | connection 23| url jdbc:postgresql://localhost:55138/?loggerLevel=OFFtenant
insert into researches (name, description, id) values ('Demo name', 'Demo description', 1);

It’s enough to add the required dependency; all other configuration is done by the library.

Hope you enjoyed this part and let me know if you have any comments or suggestions.

Summary

To summarize, we have created a simple demonstration of a multi-module, multi-tenant application using the database-per-tenant approach.

The source code of the application and simple instructions for launching it are available in the GitHub repository: https://github.com/konstde00/multitenancy_overview

Hope this will help you enjoy your work!
