Migrate your MySQL database to Postgres with Supabase, Hasura & pgloader

Aditya Mishra
5 min read · May 4, 2022


Background

Recently, we decided to rearchitect our backend & migrate to a new stack. We realised our development velocity & ability to ship new features were not up to the standards we desired.

After a month of analysing business requirements & future product plans, and some spiking, we arrived at the stack described in the picture below. All the team members seemed pretty satisfied & we envision committing to it for at least the next 2 years.

Backend Architecture

Self-hosting

Due to the nature of the task at hand, it was imperative for us to self-host our database. Benefits of self-hosting include -

  1. Pricing: No licence or subscription fees; you pay only for the infrastructure you run
  2. Portability: No vendor lock-in. Letting a software vendor handle your business-critical data wasn’t an option for us.
  3. Full control over data: When you self-host your software, you have full control over your data.
  4. Customisation: On-premises software that is open source gives you the flexibility to customise. You can add or modify code to make it meet your organisation’s needs.
  5. Integrations: You can easily integrate it with other services & extend its functionality

Hosting your own database

For our self-hosting requirements, we needed something that was open source & could easily be self-hosted. Supabase positions itself as “The Open Source Firebase Alternative”. Its core offering includes -

  1. Postgres Database
  2. PostgREST for generating APIs
  3. GoTrue for handling user registration and authentication
  4. Storage for managing files in S3 buckets or other S3-compatible systems
  5. Kong as API gateway
  6. Supabase dashboard

Clone the repo

gh repo clone supabase/supabase

Modify docker compose file

docker/docker-compose.yml contains all the services that Supabase provides. Since we didn’t need the PostgREST & Storage services, we removed them

Alternatively, you can directly use their Postgres Docker image

docker-compose.yml file
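
A sketch of what the trimmed docker/docker-compose.yml might look like after removing PostgREST & Storage; the service names, image tags & ports here are illustrative, so check the Supabase repo for the current ones:

```yaml
version: "3.8"
services:
  db:                          # Supabase Postgres image
    image: supabase/postgres:14.1.0
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  auth:                        # GoTrue: user registration & authentication
    image: supabase/gotrue:latest
    depends_on:
      - db
  kong:                        # API gateway
    image: kong:2.1
    ports:
      - "8000:8000/tcp"
      - "8443:8443/tcp"
  studio:                      # Supabase dashboard
    image: supabase/studio:latest
    ports:
      - "3000:3000"
# The rest (PostgREST) & storage services were removed, since we don't use them
```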

Modify the environment variables

Copy the contents of .env.example to a .env file in the same directory & update the secrets/keys/passwords.

I use the vscode-random extension to generate random secrets

Update POSTGRES_PASSWORD
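
If you’d rather not use an editor extension, openssl is a quick command-line alternative for generating secrets:

```shell
# Generate a 32-byte random secret, hex-encoded (64 characters),
# suitable for POSTGRES_PASSWORD, JWT secrets, etc.
SECRET=$(openssl rand -hex 32)
echo "$SECRET"
```

You can also write it straight into the file, e.g. `echo "POSTGRES_PASSWORD=$(openssl rand -hex 32)" >> .env`.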

Add Hasura GraphQL Engine

The Hasura GraphQL engine makes your data instantly accessible over a real-time GraphQL API. Just point it to your database & it will instantly provide GraphQL APIs 🚀

Update the docker-compose file from the previous step to include Hasura’s GraphQL Engine.

Note that PG_DATABASE_URL points to the Postgres database from the previous step
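
A sketch of the Hasura service entry, assuming the Postgres container is named db and the console is exposed on host port 8081 (the image tag & credentials are placeholders):

```yaml
  graphql:
    image: hasura/graphql-engine:v2.6.1
    depends_on:
      - db
    ports:
      - "8081:8080"            # Hasura listens on 8080 inside the container
    environment:
      # Points at the Postgres database started in the previous step
      PG_DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@db:5432/postgres
      HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:${POSTGRES_PASSWORD}@db:5432/postgres
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true"
```

Naming the container graphql matters later: Kong will reach Hasura through this service name on the Docker network.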

Running them all

Run docker-compose up -d to start all the services. If successful, you should see all the services running properly:

Postgres Database:

Supabase studio: http://localhost:3000

Hasura Console: http://localhost:8081/console

Migrating MySQL to Postgres

pgLoader is an open-source database migration tool that aims to simplify the process of migrating to PostgreSQL. It supports migrations from several file types and RDBMSs — including MySQL and SQLite — to PostgreSQL.

Install pgloader
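
pgloader is packaged for most platforms; for example (package names can vary by distro):

```shell
# Debian/Ubuntu
sudo apt-get install pgloader

# macOS (Homebrew)
brew install pgloader

# Verify the installation
pgloader --version
```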

Create database

Inside Postgres, create a new database with your desired name {PG_DBNAME}
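
For example, from psql (substitute your own name for {PG_DBNAME}; the connection details below assume the compose setup from earlier):

```sql
-- Connect first, e.g.: psql -h localhost -p 5432 -U postgres
CREATE DATABASE my_target_db;   -- use your {PG_DBNAME} here
```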

Migration

Migrate your local MySQL Database to PostgreSQL locally
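
One way to run the migration is with a pgloader command file; the connection strings below are placeholders for your own local MySQL & Postgres credentials:

```
-- migrate.load: run with `pgloader migrate.load`
LOAD DATABASE
     FROM mysql://mysql_user:mysql_pass@localhost:3306/source_db
     INTO postgresql://postgres:your_pg_password@localhost:5432/my_target_db

WITH include drop, create tables, create indexes, reset sequences;
```

For simple cases you can skip the command file entirely and pass both connection strings on the command line: `pgloader mysql://user:pass@localhost/source_db postgresql://postgres:pass@localhost/my_target_db`.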

Migrate CRUD APIs

Our backend had 400+ APIs, about 300 of which were CRUD APIs. Hasura provides queries & mutations for CRUD operations on top of your database schema & allows you to convert them to REST APIs.
As a result, we decided to move the existing APIs to the GraphQL layer

Track all tables & relationships

  1. Connect to the database from the GraphQL console’s data tab
  2. For the database created in the previous step, click on Track All tables
  3. Do the same for all relationships as well
  4. Implement the backend logic as queries & convert them to GET APIs
  5. Similarly, you can convert other operations into mutations & host them as POST APIs
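
As an illustration, a named query like the one below can be saved in the Hasura console & exposed through Hasura’s REST endpoints feature (the users table & field names here are hypothetical):

```graphql
# Named query, created in the Hasura console's API tab
query getUserById($id: Int!) {
  users_by_pk(id: $id) {
    id
    name
    email
  }
}
```

You could then publish it as, say, GET /api/rest/users/:id, with :id mapped to the $id variable; mutations get the same treatment as POST endpoints.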

Refer to the video below for more details -

This way we were able to move 60% of our APIs to GraphQL

(Additional) Routing graphql queries via Kong

As per the architecture diagram in the first section, all the graphql queries must be routed through Kong Gateway.

In order to do that, we need to add a GraphQL service to the list of Kong services. Read these excellent docs by Kong to understand the concepts of Services & Routes in Kong in detail

Edit the Kong configuration file located at docker/volumes/api/kong.yml

Adding graphql as a service to Kong
  • URL — The URL at which the GraphQL service is running. Remember, we named our Hasura container graphql & by default it runs on port 8080. As a result, Kong can communicate with Hasura at http://graphql:8080/v1/graphql
  • ROUTES — Routes determine how requests are sent to their Services after they reach Kong Gateway. Here, we are saying any request that Kong receives at /v1/graphql should be forwarded to the Hasura service running at the URL described above
  • PLUGINS — There are a couple of plugins, namely for CORS & Key Authentication. Many more community plugins are available here
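
Putting those three pieces together, the kong.yml entry might look roughly like this (declarative Kong config; plugin settings trimmed for brevity):

```yaml
services:
  - name: graphql
    # Hasura container is named graphql & listens on 8080
    url: http://graphql:8080/v1/graphql
    routes:
      - name: graphql-route
        # Requests hitting Kong at /v1/graphql are forwarded to Hasura
        paths:
          - /v1/graphql
    plugins:
      - name: cors
      - name: key-auth
        config:
          hide_credentials: true
```

Assuming Kong’s proxy is on port 8000 as in the compose file, a request then looks like: curl -X POST -H "apikey: $ANON_KEY" http://localhost:8000/v1/graphql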

Now any request to the GraphQL service is routed via Kong. Since we added the Key Auth plugin, every request needs an apikey header; use ANON_KEY from the environment file as its value.

Conclusion

Running Postgres with Hasura in a dockerised environment is a great solution.

GoTrue’s authentication, Hasura’s authorisation & Kong as the gateway together form a powerful combination. It can act as a backend stack for most common (& advanced) use cases.
