Adding Validations to Existing Models (Part 2)
At ACL, one of the many services that we build and maintain is a 5+ year old Ruby on Rails application. Over the years, the model validations have changed many times. Last year, I wrote a blog post about adding validations to existing models. In it, I explained that adding or changing validations on existing models can be dangerous because existing records may violate the new validations. In this blog post, I will explain how to safely migrate the database when requirements have changed and new validations need to be added that make existing data “invalid”.
General Best Practices
Whenever we add a new validation or modify an existing one, we try to stick to the following best practices:
Check and clean your production database before adding the new validation.
The existing data should not violate any new validations. If the cleaning or checking happens after a deploy, users may not be able to update existing records.
Add corresponding database constraints where possible.
Sometimes we need to skip validations in Rails when updating records. Database constraints provide a final, hard defence against invalid data that should never exist.
Don’t clean the data in a migration.
Migrations happen in a transaction. If a table becomes locked, it will stay locked until the end of the migration. Updating many records could potentially take a long time.
Clean data in a rake task.
Cleaning data with a rake task works around the transaction problem with migrations. Rake tasks can also be run multiple times, in case data somehow becomes invalid again due to a bug.
Have a ‘dry run’ mode for your rake tasks.
Dry run mode should be exactly the same as the full rake task except that it doesn’t actually save or delete any records. This allows for a count of ‘invalid’ records and gives an idea of how long the full rake task will take.
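As a sketch of this pattern, a hypothetical cleanup task might gate all destructive work behind a dry-run flag (the task, model, and environment-variable names here are illustrative, not from our codebase):

```ruby
# lib/tasks/cleanup.rake -- illustrative example
namespace :cleanup do
  desc "Remove profiles without a user (set DRY_RUN=1 to only count)"
  task remove_orphan_profiles: :environment do
    dry_run = ENV["DRY_RUN"].present?

    orphans = Profile.where.not(user_id: User.select(:id))
    puts "#{orphans.count} orphan profile(s) found"

    # Only the save/delete step differs between dry run and the real run
    orphans.find_each(&:destroy!) unless dry_run
  end
end
```

Because everything except the `destroy!` call is shared, the dry run exercises the same query and iteration code that the real run will use.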
Test your rake tasks or migrations with a clone of production data.
If possible, spin up an exact clone of production and test your rake tasks or migrations there. For rake tasks, this will tell you exactly how much time, memory or CPU will be required. This also replaces the dry run mode. For migrations with constraints, running it on the production clone will tell you if there are any records that violate new constraints. You don’t want to find this out during a real deployment to production.
Have an ‘undo’ button.
Backup the database before performing surgery on it. You may accidentally cut out the healthy cells instead of the cancer.
Don’t create invalid data in tests.
If database constraints or proper validations are missing, it is possible to create broken test data such as a post without a poster if the presence validation is not there. Tests should mimic the behaviour of the application. Constantly testing against a garbage data set will increase the chances of garbage making its way into the production database.
Let’s walk through an example of a group of models that lack validations and constraints and begin adding them. Along the way, we’ll remove records that violate the new validations. This will help solidify the best practices we recommended above.
Let’s assume we have the following:
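The original code listing is not shown here; a minimal set of models consistent with the requirements below might look like this (class and column names are assumptions):

```ruby
class User < ApplicationRecord
  # columns: email, locale, timezone
  has_one :profile
  has_many :memberships
  has_many :organizations, through: :memberships
end

class Profile < ApplicationRecord
  belongs_to :user
end

class Organization < ApplicationRecord
  has_many :memberships
end

class Membership < ApplicationRecord
  # columns: user_id, organization_id
  belongs_to :user
  belongs_to :organization
end
```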
We want to ensure the following:
- [Presence] email, locale and timezone are not blank/null
- [Inclusion] timezone is in ActiveSupport::TimeZone
- [Referential integrity] profiles always belong to a user
- [Uniqueness/has one] users have exactly one profile
- [Uniqueness across multiple fields] a user can only have a single membership to an organization
We’ll start by adding presence validations and not null constraints to all relevant fields. Checking for existing offending records is relatively simple:
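For example, a check and the matching validation and constraint for users might look like this (a sketch; the other fields and tables follow the same pattern):

```ruby
# Check for offending records before deploying the new validation
User.where(email: [nil, ""]).count

# Model validation
class User < ApplicationRecord
  validates :email, :locale, :timezone, presence: true
end

# Matching database constraints
class AddNotNullConstraintsToUsers < ActiveRecord::Migration[5.2]
  def change
    change_column_null :users, :email, false
    change_column_null :users, :locale, false
    change_column_null :users, :timezone, false
  end
end
```

Note that the presence validation also rejects blank strings, which a NOT NULL constraint alone does not catch, so the check queries for both.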
ActiveRecord has an inclusion validation we can use. However, there’s no simple way to add the corresponding constraint.
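A sketch of the validation and the corresponding check for offending records:

```ruby
class User < ApplicationRecord
  validates :timezone, inclusion: { in: ActiveSupport::TimeZone.all.map(&:name) }
end

# Finding existing records that would violate the validation
valid_zones = ActiveSupport::TimeZone.all.map(&:name)
User.where.not(timezone: valid_zones).count
```

A CHECK constraint enumerating every allowed zone name would have to be rewritten whenever the list changes, which is why there is no simple database-side equivalent here.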
This is where it gets a bit more interesting. The most obvious thing to do is add a foreign key constraint, but that may not be enough. A foreign key constraint prevents profiles from referencing users that don’t exist, but it doesn’t prevent null values. In some cases we want to allow null values on foreign keys, but not in this case.
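A sketch of a migration that adds both the foreign key and the NOT NULL constraint (assuming all orphan profiles have already been cleaned up by a rake task):

```ruby
class AddUserConstraintsToProfiles < ActiveRecord::Migration[5.2]
  def change
    # Prevents profiles from referencing users that don't exist
    add_foreign_key :profiles, :users
    # Prevents profiles with no user at all
    change_column_null :profiles, :user_id, false
  end
end
```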
First, we should add an index to speed up the search for duplicate records. This will be required for tables with 100k+ records. With the index in place, we can remove the duplicate records. Finally, once all duplicates are removed, we can drop the index and add it back with a unique constraint.
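Sketched as migration steps (with the duplicate-removal rake task run between them):

```ruby
# Step 1: a plain index to speed up the duplicate search
add_index :profiles, :user_id

# Step 2: run the rake task that removes duplicate profiles (not shown here)

# Step 3: replace the plain index with a unique one
remove_index :profiles, :user_id
add_index :profiles, :user_id, unique: true
```

The matching model validation would be `validates :user_id, uniqueness: true` on `Profile`.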
Removing the duplicates is tricky. Which record do you keep? At the time of writing this blog post, we have observed that ActiveRecord/Postgres finds the most recently updated record when querying for a has_one reference. This will most likely mean you want to keep the record with the highest ID. This may be different for different versions of ActiveRecord, Postgres or other databases.
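A sketch of the cleanup, keeping the profile with the highest ID for each user (this would live in a rake task with a dry run mode, as recommended above):

```ruby
# Find users that have more than one profile
duplicate_user_ids = Profile.group(:user_id)
                            .having("COUNT(*) > 1")
                            .pluck(:user_id)

duplicate_user_ids.each do |user_id|
  profiles = Profile.where(user_id: user_id).order(:id)
  # Destroy everything except the last (highest-id) profile
  profiles.to_a[0..-2].each(&:destroy!)
end
```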
Unique across multiple fields
This is quite similar to single field uniqueness. The only difference is we need to add an extra field to the index and a scope to the validation.
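A sketch of the scoped validation and the composite unique index:

```ruby
class Membership < ApplicationRecord
  # A user may belong to an organization only once
  validates :user_id, uniqueness: { scope: :organization_id }
end

# Matching composite unique index in a migration
add_index :memberships, [:user_id, :organization_id], unique: true
```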
The extra validations and constraints help, but they aren’t enough. There will always be changes to the application models that could potentially invalidate existing data. Currently, we spin up a clone of production every month and run a rake task that looks for ‘invalid’ data so we can proactively prevent it from occurring. This is a bit time consuming because we have a large deployment with multiple geographical production regions and multiple Rails applications at ACL. So, like any other ‘lazy’ developer, we intend to automate this task. Expect another blog post when we achieve continuous monitoring of our databases!