Migrating the Metabase application database

Amom Mendes
Blog Técnico QuintoAndar
Oct 1, 2020

At QuintoAndar, data availability is crucial to our core business, providing insights for our stakeholders, from operations and customer service teams to C-level decision-making. Alongside data sources and data pipelines, our data exploration and visualization tools play a vital role in this ecosystem. Metabase, an open-source analytics project written in Clojure, is one of these tools.

Metabase's architecture is relatively simple: the application is deployed as a single jar file backed by an application database (local H2, MySQL, or Postgres). Despite this simplicity, we experienced some performance issues (e.g., stuck queries, OOM pod kills, users complaining about slowness), which led us to focus on Metabase infrastructure improvements to provide scalability and stricter SLAs.

One such action was to migrate the Metabase application database (hereafter, Metabase DB) from a shared RDS MySQL instance to a dedicated RDS Postgres instance. We did this because running Metabase DB on MySQL was a legacy decision, and Postgres is now our standard relational database. Moreover, the fact that Metabase DB was shared with another application added noise to our understanding of its performance.

A database migration is never a mere data copy: you need to understand how to migrate the schema properly and how the application will handle the new database, including data types, data conversions, connections, relationships between objects, constraints, and so on. Here, we want to share what we discovered in the process of migrating Metabase DB. Hopefully, this will be useful to people with the same needs.

Where should I start? The tools!

As Metabase doesn't ship an implementation to migrate its application database between relational databases, we needed to figure out the best way to migrate ourselves.

There is a myriad of tools to migrate databases, both on-prem and in the cloud. However, there is also a trade-off between investing time in a tool and getting things done: if you spend a lot of time understanding, installing (if needed), and configuring a tool just to execute sporadic migration tasks, that tool is probably not the best choice.

With that in mind, we tested just two options: PgLoader (suggested in a Metabase thread by @flamber) and AWS Database Migration Service (DMS). The first is a well-known Common Lisp tool for migrating data from relational databases to Postgres, and the second was a natural option for us since we're using RDS instances.

After some tests with these tools (you can see the PgLoader scripts here and the Terraform files for DMS provisioning here), we chose to perform the migration with PgLoader. The main factor that led us to PgLoader was that it seemed to handle object dependencies (e.g., foreign keys) better than DMS**. Another relevant point is that PgLoader is an open-source project and has a beautiful Docker image 👍!

Schema Migration

Both PgLoader and DMS have schema migration features. However, after performing the tests above, we concluded that leaving this responsibility to the migration tool was not a good idea, because we would have extra work validating that all objects (primary keys, foreign keys, sequences, and so on) were correctly migrated.

Additionally, Metabase uses Liquibase to apply schema changes (called changesets in Liquibase parlance) between versions and across distinct databases. In our case, the changesets applied for MySQL were not exactly the same as those applied for Postgres. But don't worry, you're not totally alone in this task: Metabase has a utility to generate a SQL script with all the changesets that need to be applied to your database (e.g., an empty one). You can create this script by running the following commands:

export MB_DB_CONNECTION_URI="postgres://POSTGRES-HOST:5432/metabase?user=METABASE_USER&password=PASS&currentSchema=METABASE_SCHEMA"
java -jar metabase.jar migrate print

You can execute the generated SQL script on your database to create all tables, indexes, and constraints. You can also explore the table databasechangelog to see the changesets applied (151 changesets for Postgres up to version 0.34.3; see this liquibase yaml file). Well, now we can focus on migrating only the data, rather than all the schema objects.
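For instance, assuming the same placeholder connection values as in the command above, you could list the applied changesets with a quick query against Liquibase's tracking table:

# List the changesets Liquibase has applied, in execution order
psql "postgresql://METABASE_USER:PASS@POSTGRES-HOST:5432/metabase" \
  -c "SELECT id, author, dateexecuted FROM databasechangelog ORDER BY orderexecuted;"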

Data Migration Heads-up

With all objects created in the new Postgres database, we could focus on the data migration itself. In the planning/testing phase, we identified some issues when loading the data with PgLoader:

  • Heap space exhaustion: this is a known PgLoader issue, and the workaround is simply to break up the loading process, i.e., to load heavy tables separately.

Solution: in our case, we took the five biggest tables and loaded each of them first, separately (you can see the scripts' order here; a sketch of a per-table load follows this list):

Biggest Metabase tables
  • Data integrity problems: we discovered that some columns in our MySQL Metabase DB were nullable (and, in fact, we had a few null records); however, in the new schema those columns are not nullable, causing constraint violation errors.

Solution: we dropped the null constraints on those columns to load the data (since they were not critical to Metabase) and then fixed the null records with proper values. For example, some data sources in the table metabase_database had missing values in the column metadata_sync_schedule, which corresponds to the cron expression for syncing table metadata. We added the correct cron expression and fixed the load for this table, as sketched below.
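To make both workarounds concrete, here is a minimal sketch of what one per-table load looked like; it is not our actual production script. The hostnames, credentials, the example table query_execution (typically one of the biggest), and the cron expression are illustrative placeholders:

# 1. Relax the NOT NULL constraint that would break the load
psql "postgresql://METABASE_USER:PASS@POSTGRES-HOST:5432/metabase" \
  -c "ALTER TABLE metabase_database ALTER COLUMN metadata_sync_schedule DROP NOT NULL;"

# 2. Load one heavy table at a time to work around the heap exhaustion issue;
#    "data only" skips schema creation, since Liquibase already created the objects
cat > query_execution.load <<'EOF'
LOAD DATABASE
     FROM mysql://METABASE_USER:PASS@MYSQL-HOST/metabase
     INTO postgresql://METABASE_USER:PASS@POSTGRES-HOST:5432/metabase
WITH data only
INCLUDING ONLY TABLE NAMES MATCHING 'query_execution'
ALTER SCHEMA 'metabase' RENAME TO 'public';
EOF
pgloader query_execution.load

# 3. Backfill the null records with a valid sync schedule, then restore the constraint
psql "postgresql://METABASE_USER:PASS@POSTGRES-HOST:5432/metabase" <<'SQL'
UPDATE metabase_database
   SET metadata_sync_schedule = '0 50 * * * ? *' -- e.g., hourly at minute 50
 WHERE metadata_sync_schedule IS NULL;
ALTER TABLE metabase_database ALTER COLUMN metadata_sync_schedule SET NOT NULL;
SQL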

Migration Planning

After the tests cited above, we planned our production migration as expressed in the following steps:

  • Step 1: stop the service. Metabase users could create questions, dashboards, etc. during the data migration, and such objects would be lost if created while we're migrating. Sure, a blue/green strategy with a CDC tool could avoid the downtime; in our case, however, this downtime was not critical (see the sketch after this list for stopping and restarting the service).
  • Step 2: apply the Liquibase SQL script to create all objects in the new database (see Schema Migration above), and solve the constraint violations discussed in Data Integrity Problems above.
  • Step 3: migrate the data itself using the PgLoader scripts (see how to execute the scripts here).
  • Step 4: start the service, pointing Metabase DB to the new database (Postgres, in our case; see the sketch after this list).
  • Step 5: watch the Metabase startup logs. As we didn't migrate the table data_migrations, Metabase performs some minor changes in the database (e.g., dropping deprecated tables), and you can see them at startup. In our case, Metabase is deployed in a Kubernetes cluster and integrated into our CI/CD stack, so we simply needed to change configuration files and merge them in our GitHub repo.
  • Step 6: if all worked well, you can run user tests, creating some questions, dashboards, data source connections, etc. We created this clojure app to test Metabase using its API.
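As a rough sketch of steps 1 and 4 (in practice we drive these through our CI/CD configuration; the deployment name, host, and credentials below are placeholders):

# Step 1: stop the service (for a Kubernetes deployment)
kubectl scale deployment metabase --replicas=0

# Step 4: point Metabase at the new Postgres application database and start it
export MB_DB_TYPE=postgres
export MB_DB_HOST=POSTGRES-HOST
export MB_DB_PORT=5432
export MB_DB_DBNAME=metabase
export MB_DB_USER=METABASE_USER
export MB_DB_PASS=PASS
java -jar metabase.jar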

Congrats, you’ve migrated Metabase DB!

Well, and what about rollback? If you hit problems other than those covered here, just restart the service pointing Metabase DB back to your old database (MySQL, in our case).
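With the same environment variables as in the sketch above, a rollback is just a matter of pointing back to the old database (placeholder values again):

export MB_DB_TYPE=mysql
export MB_DB_HOST=MYSQL-HOST
export MB_DB_PORT=3306
export MB_DB_DBNAME=metabase
export MB_DB_USER=METABASE_USER
export MB_DB_PASS=PASS
java -jar metabase.jar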

Conclusion

Although Metabase doesn't have a utility to migrate its application database between RDBMSs (e.g., MySQL to Postgres), we can use some of the available tools to migrate only the data and use Metabase's own utilities to create the objects in the new database, preserving the data integrity required by the Metabase application. In our scenario, PgLoader was very useful because of its flexibility and ease of use. However, a CDC tool (such as DMS) could be more suitable for use cases where downtime is not allowed.

I would like to thank Rafael Ribaldo (Ribz) and Kenji from the Data Infrastructure Team (DIE☠️) for their great contributions to this article and to the migration itself.

** For example, DMS requires disabling constraints while loading data to avoid constraint violation errors.
