
Save your MySQL database from running out of disk space

This week I had a sleepless night: we were running out of disk space on one of our production systems, and we could not afford to lose any incoming data.

In our case, 99% of all data in this system lives in the events table. We gather current data there and, having done some analysis, export it to another system on a daily basis. Then we safely delete the exported records from the events table.
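
The cleanup itself is nothing fancy; a sketch of what such a delete can look like, assuming your export job records the last exported ID ([last_exported_id] below is that placeholder), run repeatedly until it affects zero rows so each transaction stays small:

DELETE FROM events WHERE id <= [last_exported_id] LIMIT 10000;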

All was good until the daily amount of data started to grow heavily. Once your table size goes beyond 45% of your disk space, you cannot do

OPTIMIZE TABLE tablename;

any more, as it requires another 45% of disk space to build a copy of the table, copying the valid records while omitting the deleted rows.
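
To see how close you are to that limit, you can compare the table size with the space MySQL reports as reclaimable. A quick check against information_schema (assuming the app_db_prod schema used in the commands below), to be compared with what df -h reports for the MySQL data directory:

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       ROUND(data_free / 1024 / 1024) AS reclaimable_mb
FROM information_schema.tables
WHERE table_schema = 'app_db_prod' AND table_name = 'events';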

Of course, this is a production system, so you can hardly allow any downtime of the database.

Here is the algorithm for saving your production database from running out of disk space without major downtime:

1. Have a backup

As a rule of thumb, you should always have a backup of your production database — just to be safe.
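
If you do not have a recent one, a compressed dump written to another machine is usually enough. A sketch using the same connection details as the commands below (--single-transaction gives a consistent snapshot of InnoDB tables without locking them); run it from the other server, or pipe it over ssh, since the local disk is the thing that is full:

mysqldump --single-transaction -u root -p[password] --host=db_host --port=3306 app_db_prod | gzip > app_db_prod_backup.sql.gz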

2. Stop the application

Stop the application that uses your database, so that no new records are added during the process (the whole thing should not take more than 30 seconds if you prepare the code snippets for it in advance).
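
What 'stopping the application' means depends on your deployment. Assuming a systemd-managed app (app.service is a placeholder for your unit name), it helps to have both the stop command and the matching start command for step 7 ready in separate terminals:

sudo systemctl stop app.service    # step 2: stop writes before the cleanup
sudo systemctl start app.service   # step 7: bring the application back afterwards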

3. Dump the unexported records to a file on another server:

mysqldump --lock-tables=false --no-create-db --no-create-info -u root -p[password] --host=db_host --port=3306 app_db_prod events --where="id > [last_exported_id]" > app_db_prod_events_unexported.sql

4. Note down the last ID of the events table, as TRUNCATE will reset your sequence for IDs

mysql --host=db_host --port=3306 -u root -p[password] app_db_prod --batch -e "SELECT id FROM events ORDER BY id DESC LIMIT 1" | grep -v "id"
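
If you are doing this from a shell, it is handy to capture that value right away so you can reuse it in step 6; -N (--skip-column-names) drops the header, so no grep is needed:

LAST_ID=$(mysql --host=db_host --port=3306 -u root -p[password] app_db_prod -N -e "SELECT MAX(id) FROM events")
echo "last id: $LAST_ID"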

5. Delete all data from the events table. This is the only way to reclaim disk space if you do not have enough room for OPTIMIZE TABLE (note that TRUNCATE returns the space to the operating system only when the table has its own tablespace file, i.e. innodb_file_per_table is enabled)

mysql --host=db_host --port=3306 -u root -p[password] app_db_prod --batch -e "TRUNCATE TABLE events"

6. Change the sequence for new IDs to [last_id + 1], where [last_id] is the value you noted in step 4 (otherwise new rows would collide with the IDs of the records dumped in step 3)

mysql --host=db_host --port=3306 -u root -p[password] app_db_prod --batch -e "ALTER TABLE events auto_increment=[last_id+1]"

7. Start the application, so that new records can be added as soon as possible to minimize downtime

8. Import unexported records from the dump

mysql --host=db_host --port=3306 -u root -p[password] app_db_prod < app_db_prod_events_unexported.sql
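
To double-check the reimport, you can count the rows in the range that was dumped in step 3; the result should match the number of records in the dump ([last_exported_id] and [last_id] are the same placeholders as above):

mysql --host=db_host --port=3306 -u root -p[password] app_db_prod --batch -e "SELECT COUNT(*) FROM events WHERE id > [last_exported_id] AND id <= [last_id]"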

Assuming that steps 3, 4, 5 and 6 can be done within seconds, you can reclaim the disk space quickly and then safely reimport the unexported records into the database.


Have fun being a DB hero!