Migrating Your MySQL Application from PlanetScale to Turso
Introduction
As an indie hacker with no budget, one of the key factors when I choose a database for a project is its free tier. And since time is the only currency I can invest, I usually have a go-to solution that I am already familiar with and that helps me get from idea to market as fast as possible.
Until recently, that database solution was PlanetScale, a managed MySQL database that saved me time and effort. It got my products to market, and I didn't have to worry about payments until I had product-market fit and enough scale to justify the expense. But PlanetScale recently announced the termination of its Hobby plan, effective April 8th.
Looking for an Alternative
Since I already use MySQL (and deploy on Vercel Edge), an obvious alternative was TiDB Serverless, a serverless, MySQL-compatible database with generous free clusters. I checked whether this option was viable for me, but I use DrizzleORM in my codebase, and at the time of this writing it did not support TiDB. Migrating to TiDB would therefore mean a full refactor of my database access layer, which was not an option for me.
Other managed database alternatives that looked interesting were Xata — a serverless data platform for PostgreSQL, and Turso — a managed SQLite database at the edge. Both have DrizzleORM integrations, but since Turso is basically a geo-replicated SQLite DB, I really liked the idea of pairing my application with a local replica of the DB using Embedded Syncs. For now I only need the edge solution, and since Turso offers 3 regions in its free tier, I realized I would get a latency win here, so I decided to go with it.
Now, let's get to the technical part.
Migrating from PlanetScale to Turso in 5 minutes
Prerequisites
- Access to Bash/Zsh
- pscale CLI installed
- mysql2sqlite installed
- Turso CLI installed
Step 1: Exporting Data from PlanetScale
You can use mysqldump to export all your data, but to avoid the 100K-row limit, use the PlanetScale CLI instead:
> pscale database dump [db-name] [branch]
This command creates a new directory that includes schema and data files with a .sql suffix.
Here is a sample of what it looked like for AI Collection:
Step 2: Creating a local SQLite DB
Before you use mysql2sqlite to generate the local SQLite DB, there are some adjustments that need to be made to the SQL queries.
In my case, I preferred to combine all the .sql files into one big file, and then I needed to fix the double quotation mark escaping in the SQL queries.
To concat the files I ran:
> for f in *.sql; do cat "$f" >> all.sql; done
Then, to fix the double quotation mark escaping (MySQL escapes a double quote as \" while SQLite expects ""):
> perl -pi -e 's/\\"/""/g' all.sql
And finally, create the local SQLite DB:
> ./mysql2sqlite all.sql | sqlite3 my_sqlite.db
You can check the validity of your data by opening the database with the sqlite3 shell and querying it:
> sqlite3 my_sqlite.db
Step 3: Upload the Data to Turso
If you already have an account and the Turso CLI installed, then run:
> turso auth login
And then, create a new DB on Turso from the local SQLite DB:
> turso db create [db-name] --from-file my_sqlite.db
To verify that the data was uploaded correctly, run:
> turso db shell [db-name] "select count(*) from <your-table>;"
And that's it, your DB is migrated. But I still needed to adjust my code, so if you are not using DrizzleORM, you can stop reading here (but I would still appreciate a clap 👏).
Step 4: Adjusting the code
Since I am using an ORM, the code changes should be minimal. And indeed, in my case I only needed to follow the steps from the Drizzle docs to change my connection logic. There were, however, a few incompatibilities between the column types, so here are the adjustments I made:
MySQL: serial("id").primaryKey()
SQLite: integer("id", { mode: "number" }).primaryKey({ autoIncrement: true })
MySQL: varchar(...)
SQLite: text(...)
MySQL: boolean(...)
SQLite: integer(..., { mode: "boolean" })
MySQL: json(...)
SQLite: text(..., { mode: "json" })
MySQL: timestamp(...)
SQLite: text("timestamp").default(sql`(CURRENT_TIMESTAMP)`)
Wrapping Up
In my opinion, moving fast and adjusting to change is one of the most important skills in entrepreneurship and software development. And a very cool thing about being the solo developer on a project is that you get to decide and experiment with whatever suits your use case best at any given point in time. Turso at the edge works great for me at the moment, but I will experiment with Turso's Embedded Syncs to compare overall website latency, and I promise to write about the outcome.
If you are interested in what I’m building, check out: AI Collection and feel free to ping me on X/Twitter.