Definitely interesting. I’m in a situation where I’m considering Postgres for a start-up SaaS. My goal is to keep the data isolated, as I’ve had less-than-pleasant experiences with multi-tenancy in a single shared database: data leakage risks, performance woes with large tables, and migration/maintenance woes with things like new index creation across large tables. I like the schema-per-tenant approach because to me it is scalable.

I have designed a master database that tracks all tenants and servers: each tenant is assigned a server and a schema, and each server holds a number of schemas. Tenant schemas are versioned, and the intention is to keep all tenants on a common database schema (no per-tenant customization). So as the application scales and I schedule schema updates, my application landing page will hand off to application instances whose versions match the tenant’s schema. For instance, when I upgrade from v1.0 to v1.1, there will (temporarily) be both v1.0 and v1.1 application servers. Tenant schemas will be rolled over to v1.1 across however many days/windows it takes; tenants are directed to v1.0-compatible app servers until their schema is upgraded, then pointed at v1.1. When the rollout is complete, the v1.0 servers are dropped.

I’m using a Snowflake derivative for ID generation to accommodate the fact that some data will sync between tenants. In those cases the data is marked with a tenant ID, but it’s not a FK/PK; the tenant ID is solely there to identify the data’s owner. I do have to enforce things like limiting tenant-to-tenant copying to matching schema versions, but these are exception cases, not the norm.

My thinking is that this will work out in my case because, while I hope to grow the system to 100k+ users, the schema complexity and data size per tenant will remain relatively small (thousands of records or fewer, on average). If a tenant gets too big, I can migrate them off to a different server.
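The version-matched hand-off could be sketched roughly like this. This is only an illustration of the idea, not my actual implementation; the names (`Tenant`, `APP_POOLS`, `route`) and the in-memory lookup standing in for the master database are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Tenant:
    id: int
    server: str          # database server hosting this tenant's schema
    schema: str          # e.g. "tenant_00007"
    schema_version: str  # e.g. "1.0" or "1.1"

# App-server pools keyed by the schema version they were built against.
# During a rollout both versions exist; after it, the old pool is dropped.
APP_POOLS = {
    "1.0": ["app-v10-a.internal", "app-v10-b.internal"],
    "1.1": ["app-v11-a.internal"],
}

def route(tenant: Tenant) -> str:
    """Return an app server whose version matches the tenant's schema."""
    pool = APP_POOLS.get(tenant.schema_version)
    if not pool:
        raise LookupError(f"no app servers for schema v{tenant.schema_version}")
    # Deterministic pick for the sketch; a real landing page would load-balance.
    return pool[tenant.id % len(pool)]
```

So a tenant still on v1.0 keeps landing on a v1.0 pool member until its schema is upgraded, at which point updating `schema_version` in the master record is enough to re-route it.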
Should the service really take off, I’ll look to keep a migration path open.
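For what it’s worth, the Snowflake-derivative ID generation mentioned above can be sketched as follows. This is a minimal single-process illustration assuming the common 41-bit timestamp / 10-bit worker / 12-bit sequence layout and a made-up custom epoch, not my exact derivative:

```python
import time

EPOCH_MS = 1_600_000_000_000  # custom epoch (assumption for the sketch)

class SnowflakeGenerator:
    def __init__(self, worker_id: int):
        if not 0 <= worker_id < 1024:       # must fit in 10 bits
            raise ValueError("worker_id out of range")
        self.worker_id = worker_id
        self.last_ms = -1
        self.sequence = 0

    def next_id(self) -> int:
        ms = int(time.time() * 1000) - EPOCH_MS
        if ms == self.last_ms:
            self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit sequence
            if self.sequence == 0:
                # Sequence exhausted this millisecond; spin to the next one.
                while ms <= self.last_ms:
                    ms = int(time.time() * 1000) - EPOCH_MS
        else:
            self.sequence = 0
        self.last_ms = ms
        # 41 bits timestamp | 10 bits worker | 12 bits sequence
        return (ms << 22) | (self.worker_id << 12) | self.sequence
```

Because the ID encodes time and worker rather than tenant, rows keep globally unique, roughly time-ordered keys even when they sync between tenants; the separate tenant-ID column then only has to say who owns the row.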