Database Architecture — Using One Database Schema per Customer

Vladimir Zulin
Pipedrive R&D Blog
Jan 9, 2020 · 8 min read

Growth and Scalability

Web applications that see significant growth will eventually need to scale, and this is especially true for data-driven applications such as Pipedrive (particularly when MySQL is being used).

Dealing with this growth is something I’d like to cover in this article, specifically in relation to how we scale our customers’ data: using one database schema per customer. With this approach, each customer’s data is stored in its own namespace (aka schema). For example, company_1234567 has its own set of tables, with the same structure as other customers’ tables, and all the data that belongs to this customer is stored in these tables. This is currently both the oldest and the most widely used approach to data scaling in Pipedrive.
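
As a minimal illustration of the namespacing (the table names here are hypothetical examples, not Pipedrive's real table set), every query for a given customer is scoped to that customer's own schema:

```typescript
// Illustrative sketch only: one MySQL schema (namespace) per customer.
// The table names below are hypothetical, not Pipedrive's actual table set.
const companyId = 1234567;
const schemaName = `company_${companyId}`; // each customer gets its own schema

// Every schema carries the same set of tables, e.g.:
const exampleTables = ["deals", "persons", "organizations", "activities"];

// Queries for this customer are scoped to its schema, never to a shared table:
const sql = `SELECT COUNT(*) AS open_deals FROM \`${schemaName}\`.deals WHERE status = 'open'`;
console.log(exampleTables, sql);
```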

To gain more context, let’s go over the history, the architecture, and the main benefits and drawbacks.

History

Let’s first jump back to the year 2010 and try to answer a few questions about how things progressed the way they did.

  • Why was MySQL chosen as the main storage, and what criteria was the choice based on?
  • How many customers were there at the time of the decision?
  • Why was the schema-per-customer sharding pattern chosen for scaling?

Here are a few words from Pipedrive cofounder, Martin Tajur, regarding the decision:

MySQL was chosen because all of our first technical people were familiar with it. We knew what we needed and knew how to achieve it with MySQL. Had we chosen anything else we would have had to learn something new on the go. It would have definitely been possible but we would have been slower. We made the decision early on, day zero, basically. We had zero customers. The decision to go with separate database per customer rather than shared schemas was made when we had, I think, less than 50 customers. We saw this as a viable option to outsmart typical scaling issues of large database clusters that needed to be in sync. We also wanted resilience and better data isolation. E.g. when one server goes down, only a limited number of customers would be affected and just the thought of having critical customer data in a single table across different companies seemed pretty scary to us. Hence, we split it all out and built custom tooling to help us grow with it. It was partly convenience and partly our inner drive to focus our energy more on building a great product and not having to think too much about sharding tables, synchronizing clusters and all the other scary things that one would have ended up doing if we had chosen the route of putting everything in a huge, single database. We also knew the amount of data we would eventually accumulate would be… very big.

Current state

As previously mentioned, one database schema per customer is the oldest and most heavily used sharding pattern in Pipedrive, and we use it to store and query about 95% of our customers’ data. It runs on ~280 database instances (masters only; counting replicas, about twice that), ~185,000 schemas, ~120 tables per schema, and ~25 TB of storage (masters only; again, about twice that with replicas).

There are several tools built to support operational tasks, such as:
1. Alter-engine — runs data manipulation and data definition queries in parallel to keep schemas in sync.
2. Db-version — database schemas versioning.
3. DB maintenance tool (Backoffice) — used to set/unset database maintenance mode. It can be used to stop serving customer traffic and can be applied to a single schema, a whole database instance, or all instances.
4. Roro/tanker — set of services to move a company schema from one database instance to another. Supports cross region migration.
5. Company-database-creator — a service that ensures we always have free schemas ready for newly registered customers. New schemas are placed across instances with a round-robin algorithm that takes the company count per database instance into account (see the sketch after this list).
6. Dbregistry-service — a service that provides an API to the metadata database, which holds the company-to-database relations map.
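
For illustration, here is a minimal sketch of the placement logic company-database-creator could use, based only on the description above (the balancing rule and instance names are assumptions, not Pipedrive's actual implementation):

```typescript
// Hypothetical sketch of schema placement: prefer the instance with the fewest
// company schemas, up to a per-instance limit.
interface DbInstance {
  host: string;
  companyCount: number; // number of company schemas already on this instance
}

function pickInstanceForNewSchema(instances: DbInstance[], maxPerInstance = 1000): DbInstance {
  const candidates = instances.filter((i) => i.companyCount < maxPerInstance);
  if (candidates.length === 0) {
    throw new Error("No free capacity; a new set of database instances is needed");
  }
  // Least-loaded first approximates the weighted round-robin described above.
  candidates.sort((a, b) => a.companyCount - b.companyCount);
  return candidates[0];
}

const target = pickInstanceForNewSchema([
  { host: "db-001.internal", companyCount: 998 },
  { host: "db-002.internal", companyCount: 640 },
]);
console.log(`Pre-create the next free schema on ${target.host}`);
```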

Here is an illustrative description of the architecture:

Microservice X asks dbregistry-service where company_N is located. Once it knows which database instance hosts company_N, the service connects to it to process that company’s data. Alter-engine, company-database-creator and roro/tanker also use dbregistry-service to get database metadata before performing alters, migrations, and new company schema creation.
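
A minimal sketch of that routing flow might look like the following, assuming dbregistry-service exposes an HTTP lookup endpoint (the endpoint path, response shape, and table name are assumptions for illustration):

```typescript
// Sketch of the company-to-database routing described above.
// Requires Node 18+ (global fetch) and the mysql2 package; all names are illustrative.
import { createConnection } from "mysql2/promise";

interface CompanyLocation {
  host: string;
  port: number;
  schema: string; // e.g. "company_1234567"
}

async function queryCompanyData(companyId: number): Promise<unknown[]> {
  // 1. Ask dbregistry-service which database instance hosts this company.
  const res = await fetch(`http://dbregistry-service/companies/${companyId}`);
  const location = (await res.json()) as CompanyLocation;

  // 2. Connect to that instance and query the company's own schema.
  const conn = await createConnection({
    host: location.host,
    port: location.port,
    database: location.schema,
  });
  try {
    const [rows] = await conn.query("SELECT * FROM deals LIMIT 10"); // hypothetical table
    return rows as unknown[];
  } finally {
    await conn.end();
  }
}
```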

Pros and cons

Every scaling solution, schema-per-customer included, has both advantages and disadvantages. In this section I will go through some of the experience we’ve gained over the years and try to highlight both the positives and the negatives.

Pros

The main benefit of this approach is, of course, how easy it is to scale. Adding a new set of database instances and making them available to company-database-creator immediately gives us room to store and process data for thousands of new companies.

Next is the ease of management and operations. A MySQL dump and restore of a single schema is easy to manage, which also makes migrations between database instances much simpler.
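
As a rough sketch of what such a single-schema move can look like with standard mysqldump/mysql tooling (hosts and the driver script are placeholders, not Pipedrive's roro/tanker implementation):

```typescript
// Hypothetical helper that dumps one customer schema from a source instance and
// replays it on a target instance. Credentials/TLS options are omitted for brevity.
import { execSync } from "node:child_process";

function moveCompanySchema(schema: string, sourceHost: string, targetHost: string): void {
  // --single-transaction keeps the InnoDB dump consistent without locking the tables;
  // --databases includes the CREATE DATABASE statement so the schema is recreated on the target.
  execSync(
    `mysqldump --single-transaction --host=${sourceHost} --databases ${schema} ` +
      `| mysql --host=${targetHost}`,
    { stdio: "inherit" }
  );
}

moveCompanySchema("company_1234567", "db-001.internal", "db-042.internal");
```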

The level of data isolation is also a bit higher compared to a shared-schema approach: an error made inside one schema impacts only that particular company.

Circling back to scaling, Pipedrive was originally meant for small to medium-sized businesses, and companies with large sets of data were not an issue in the past. However, the situation has changed and we now have several large companies, but we can still host them with the existing pattern by simply moving them to a dedicated database instance (a database instance per customer). To identify these larger companies, we use automatic monitoring based on the analysis of MySQL slow logs.

Moving them is still a manual procedure, but the upside is that 50% of slow log entries belong to only 7 heavily loaded schemas. 🙂

(Chart: top 9 slow-log companies in the EU region)
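
To illustrate this kind of analysis, here is a minimal sketch that counts slow-log entries per company schema; the parsing is deliberately simplified (the exact slow-log format varies by MySQL version and flavor, and a real setup would use a proper digest tool):

```typescript
// Hypothetical slow-log aggregation: count slow queries per company schema and
// surface the heaviest tenants. The path and parsing are simplified placeholders.
import { readFileSync } from "node:fs";

function topSlowLogSchemas(slowLogPath: string, topN = 10): Array<[string, number]> {
  const counts = new Map<string, number>();
  for (const line of readFileSync(slowLogPath, "utf8").split("\n")) {
    // The slow log records the default schema in lines such as "use company_1234567;".
    const match = line.match(/^use (company_\d+);/);
    if (match) counts.set(match[1], (counts.get(match[1]) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, topN);
}

console.log(topSlowLogSchemas("/var/log/mysql/mysql-slow.log"));
```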

Cons

If it weren’t for the drawbacks and potential problems behind this approach, I likely would never have written this article to start with.

In the “pros” section I discussed the main upside, scalability; now let me cover the downsides.

First of all, one database instance can store up to 1,000 companies (schemas), and the companies can differ greatly in size. Since we don’t separate companies based on their size (there’s no small, medium or big status at the product level), this causes unevenly balanced shards. Databases can carry very different loads, and there isn’t a satisfactory, automated way to balance them. Luckily, it hasn’t become a major issue yet, but it almost certainly will if we continue to add more data objects to schemas.

Secondly, the performance of one database instance is limited by the size of a single virtual machine. We can reduce the number of companies per instance, but with very large companies we hit a resource boundary. Recently, company size has been trending upwards, so it’s only a matter of time before this becomes a problem.

Also, bigger companies may require custom database configs, which means they get configuration overrides, and these overrides are usually applied manually. This eventually leads to configuration drift or desync.

To summarize, all the problems mentioned above also lead to higher operational overhead.

The second set of problems is related to the MySQL engine.

I mentioned that we have around 185,000 schemas and that the average number of schemas per database instance is ~1,000. Every schema has ~120 tables, which means ~120,000 tables in a single database instance. At this scale, using the performance schema or running the PMM client leads to performance issues: PMM crashes on such a large amount of metadata, and queries against the performance schema are not optimized for it.

An additional issue is running alters to synchronize schema structures. In our current MySQL (version 5.7), the default schema change algorithm is COPY. This means that when adding or deleting a column, the whole table is copied in the background with the new structure and then swapped in for the old one. During this time the table is not accessible, and for bigger tables the copy creates IO spikes that can impact neighboring schemas.

The newer MySQL (version 8) introduces a new DDL algorithm that changes the table’s structure without copying it. This is good, but our tests show it doesn’t apply in 100% of cases, and keeping so many schemas in sync remains problematic. An alter can be interrupted, or even fail because of a database crash, and then never be retried. Or a database restore is done by support engineers and left without the latest changes. We have had cases where schemas were out of sync and all of them had to be fixed manually.
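
As an illustration of the kind of guard rail one can put around such alters (the table and column names are hypothetical), MySQL lets you request an explicit algorithm so a statement fails fast instead of silently falling back to a full table copy:

```typescript
// Hypothetical migration step: try the cheapest online DDL first and fall back explicitly.
// ALGORITHM=INSTANT is available from MySQL 8.0 for many ADD COLUMN cases;
// ALGORITHM=INPLACE, LOCK=NONE errors out instead of copying when it can't comply.
import { createConnection, Connection } from "mysql2/promise";

async function addColumn(conn: Connection): Promise<void> {
  const add = "ALTER TABLE deals ADD COLUMN archived_at DATETIME NULL";
  try {
    await conn.query(`${add}, ALGORITHM=INSTANT`);
  } catch {
    // Falls back to an in-place rebuild; still avoids the COPY algorithm.
    await conn.query(`${add}, ALGORITHM=INPLACE, LOCK=NONE`);
  }
}

async function main(): Promise<void> {
  const conn = await createConnection({ host: "db-001.internal", database: "company_1234567" });
  try {
    await addColumn(conn);
  } finally {
    await conn.end();
  }
}

main();
```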

Finally, there is data encapsulation: many microservices store and query data in the same schema, which is an anti-pattern in the microservices world. Data should be isolated, with access provided only via an API.

Conclusion

I think that back in 2010 it was a brilliant decision to use one database schema per customer as the scaling pattern. This decision, with the tooling built around it, helped support fast growth and avoid any big performance issues. For quite a while there weren’t any dedicated DBA engineers, and that didn’t become a big problem. So far, this scaling pattern is the most widely used one and it works very well for 99% of the companies.

However, the current situation has changed things slightly. We see that the common pattern has some scalability issues that are hard to address, optimize, or fix in any easy way. Additionally, it produces overhead for the DBA engineers, and it has become harder to build automation without introducing product-level limits on customers’ data based on their subscription.

The number of larger companies is growing, and new application features are being developed that require data storage that scales. We can still use the schema-per-customer pattern, but we will run into different problems that are, unfortunately, not so easy to solve… maybe we will introduce a new pattern that can scale without limitations while remaining MySQL compatible?
