Copy production database data to a staging or local database with only a Laravel seeder

I recently needed to migrate and seed a staging database with live/production data. I managed to do this with two database connections and a seeder that runs only in the staging environment.

FAST FWD* The core of the staging seeder is this:

// Connect to production database
$live_database = DB::connection('mysql');

// Get table data from production
foreach ($live_database->table('table_name')->get() as $data) {
    // Save data to staging database - default db connection
    DB::table('table_name')->insert((array) $data);
}

Want more detail? Here is a full explanation of how it all works.

First, copy the mysql connection in your config/database.php file and paste it below the original, then rename the new connection to mysql-staging . Point the staging connection at new environment variables. Here is what mine looks like.

'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
    'strict' => true,
    'engine' => null,
],

'mysql-staging' => [
    'driver' => 'mysql',
    'host' => env('DB_STAGING_HOST', '127.0.0.1'),
    'port' => env('DB_STAGING_PORT', '3306'),
    'database' => env('DB_STAGING_DATABASE', 'forge'),
    'username' => env('DB_STAGING_USERNAME', 'forge'),
    'password' => env('DB_STAGING_PASSWORD', ''),
    'charset' => 'utf8mb4',
    'collation' => 'utf8mb4_unicode_ci',
    'prefix' => '',
    'strict' => true,
    'engine' => null,
],

Next you need to go into your .env and add the variables:

DB_CONNECTION=mysql-staging
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=mydatabase
DB_USERNAME=homestead
DB_PASSWORD=secret
DB_STAGING_HOST=127.0.0.1
DB_STAGING_PORT=3306
DB_STAGING_DATABASE=mydatabase-staging
DB_STAGING_USERNAME=homestead
DB_STAGING_PASSWORD=secret

The .env on your staging server should have DB_CONNECTION=mysql-staging as the default connection. This is because we want all regular DB calls to save to staging. For the part that reads production data, we will specify the mysql connection manually.
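To make the two connections concrete, here is a minimal sketch of how the default and explicit connections behave once DB_CONNECTION=mysql-staging is set (the users table is just an assumed example):

// Uses the default connection from DB_CONNECTION, i.e. mysql-staging
DB::table('users')->count();

// Identical to the line above, with the connection spelled out
DB::connection('mysql-staging')->table('users')->count();

// Always targets the production connection, regardless of the default
DB::connection('mysql')->table('users')->count();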

Next let’s add a new seeder class to our seeds folder with

php artisan make:seeder StagingSeeder

Now let’s open that file and add the following to the run function. Where you see table_name, replace it with the table you want to bring over to the staging server. Then simply repeat the block for every table in your database, IN THE ORDER OF YOUR MIGRATIONS (otherwise foreign key constraints can fail on insert).

NOTE ** This approach works well if you have a small to medium number of tables. If you have more, you may want to just use an automated MySQL backup package like this one. Alternatively you could fake the data using factories. For large tables, see the chunked sketch after the seeder below.

<?php

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;

class StagingSeeder extends Seeder
{
    public function run()
    {
        // Connect to production database
        $live_database = DB::connection('mysql');

        // Get table data from production
        foreach ($live_database->table('table_name')->get() as $data) {
            // Save data to staging database - default db connection
            DB::table('table_name')->insert((array) $data);
        }

        // Get table_2 data from production
        foreach ($live_database->table('table_2_name')->get() as $data) {
            // Save data to staging database - default db connection
            DB::table('table_2_name')->insert((array) $data);
        }
    }
}
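As the note above mentions, get() loads an entire table into memory at once. If one of your tables is big, a chunked variant keeps memory usage flat. This is a minimal sketch, assuming the table has an auto-incrementing id column (chunk() requires an orderBy):

// Connect to production database
$live_database = DB::connection('mysql');

// Pull production rows 500 at a time instead of all at once
$live_database->table('table_name')->orderBy('id')->chunk(500, function ($rows) {
    // Bulk-insert each batch into staging - default db connection
    DB::table('table_name')->insert(
        $rows->map(function ($row) {
            return (array) $row;
        })->all()
    );
});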

Once you’re done, add the code below to the run method of your DatabaseSeeder.php file in your seeds folder. Make sure APP_ENV=staging is set in the .env on your staging server.

if (app('env') == "staging") {
    // Staging Seeder
    $this->call(StagingSeeder::class);
}

Want to test locally? Set APP_ENV=local (and DB_CONNECTION=mysql) in your local .env, then migrate + seed to build your local “production” data. Then switch APP_ENV to staging, point DB_CONNECTION back at mysql-staging, and run php artisan migrate:fresh --seed to migrate and clone your “production” database into the staging database. After this you should have in-sync staging + production data whenever you seed your staging environment.
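In other words, the local test is two .env flips. A sketch of the two states, with values assumed from the example .env above:

# Step 1 - build the local "production" data
APP_ENV=local
DB_CONNECTION=mysql
# then run: php artisan migrate --seed

# Step 2 - clone it into staging
APP_ENV=staging
DB_CONNECTION=mysql-staging
# then run: php artisan migrate:fresh --seed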

PS. I tried to loop through the table names in the information_schema table, but it turns out the database tables are migrated too fast, and MySQL only tracks create_time with one-second resolution. Too many tables share the same second to properly determine the migration order. This was the main reason I couldn’t loop through the tables and had to write each one out manually.
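For the curious, that abandoned approach looked roughly like this (a hedged sketch of the idea, not the exact code):

// Try to recover migration order from table creation times.
// create_time has 1-second resolution, so tables migrated in the
// same second come back in an ambiguous order.
$tables = DB::connection('mysql')
    ->table('information_schema.tables')
    ->where('table_schema', config('database.connections.mysql.database'))
    ->orderBy('create_time')
    ->pluck('table_name');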

— — — — —

Want to work together? I design and code at my company, Firescript. Feel free to drop by and say hey via Drift. Not your style? I’m on Twitter, Instagram & Facebook as well.