
Database Migration Strategies: A Complete Guide

Mayur Dabhi
March 21, 2026
22 min read

Database migrations are one of the most critical—and risky—operations in software development. A poorly executed migration can lead to data loss, application downtime, and frustrated users. Yet as applications evolve, schema changes become inevitable. The key is knowing how to migrate safely, efficiently, and with minimal impact on your production systems.

In this comprehensive guide, we'll explore battle-tested migration strategies used by companies handling millions of records. You'll learn when to use each approach, how to implement zero-downtime migrations, and the critical mistakes to avoid. Whether you're renaming a column or restructuring an entire database, these strategies will help you sleep better at night.

What You'll Learn
  • The Expand-Contract (Parallel Change) pattern for zero-downtime migrations
  • Blue-Green deployment strategies for database changes
  • Rolling migrations for distributed systems
  • Version control and rollback strategies
  • Data migration techniques for large tables
  • Tools: Laravel Migrations, Flyway, Liquibase, and more
  • Real-world examples with production-ready code

Understanding Migration Challenges

Before diving into strategies, let's understand why database migrations are challenging. Unlike application code that can be instantly rolled back, database changes often involve:

Data Transformation

Moving, splitting, or merging data across columns and tables. Once transformed, reversing may be impossible.

Lock Contention

Schema changes on large tables can lock rows or entire tables, blocking reads and writes.

Backward Compatibility

Old application versions may still be running during deployment, expecting the old schema.

Consistency Requirements

Ensuring data integrity during the migration window when both schemas might be active.

Migration Risk Spectrum

High-risk operations: DROP COLUMN, RENAME TABLE, CHANGE COLUMN TYPE, ADD NOT NULL.
Low-risk operations: ADD NULLABLE COLUMN, ADD INDEX, CREATE TABLE, ADD DEFAULT VALUE.

The solution: break each high-risk operation into a sequence of low-risk steps. For example, RENAME COLUMN becomes ADD new column + COPY data + UPDATE app + DROP old column (Expand, Migrate, Update App, Contract).

Breaking high-risk operations into safe, reversible steps using the Expand-Contract pattern

The Expand-Contract Pattern

The Expand-Contract pattern (also called Parallel Change) is the gold standard for zero-downtime migrations. Instead of making breaking changes directly, you expand the schema to support both old and new formats, migrate data, update the application, then contract by removing the old format.

Phase 1: Expand

Add new columns, tables, or structures alongside existing ones. The schema now supports both old and new formats. Application continues using old format.

Phase 2: Migrate Data

Copy or transform data from old structure to new. Use background jobs for large datasets. Dual-write to keep both in sync during transition.

Phase 3: Update Application

Deploy new application code that reads/writes to the new structure. Old structure is still available as fallback. Monitor for issues.

Phase 4: Contract

Once confident, remove the old columns/tables. This is the only irreversible step—ensure thorough testing before executing.

Example: Renaming a Column

Let's say we need to rename user_name to username in a users table. Here's how to do it safely:

Migration 1: Expand (add new column)
// database/migrations/2026_03_21_001_add_username_column.php

public function up(): void
{
    Schema::table('users', function (Blueprint $table) {
        // Add new column (nullable initially)
        $table->string('username')->nullable()->after('user_name');
    });
}

public function down(): void
{
    Schema::table('users', function (Blueprint $table) {
        $table->dropColumn('username');
    });
}
Migration 2: Migrate data
// database/migrations/2026_03_21_002_copy_username_data.php

public function up(): void
{
    // For small tables, direct update works
    DB::statement('UPDATE users SET username = user_name WHERE username IS NULL');
    
    // For large tables, use chunking:
    // User::whereNull('username')
    //     ->chunkById(1000, function ($users) {
    //         foreach ($users as $user) {
    //             $user->update(['username' => $user->user_name]);
    //         }
    //     });
}
Model: Dual-write accessor
// app/Models/User.php - During transition period

class User extends Model
{
    // Accessor: read from new column, fallback to old
    public function getUsernameAttribute(): string
    {
        return $this->attributes['username'] 
            ?? $this->attributes['user_name'];
    }

    // Mutator: write to both columns
    public function setUsernameAttribute($value): void
    {
        $this->attributes['username'] = $value;
        $this->attributes['user_name'] = $value; // Keep in sync
    }
}
Migration 3: Contract (remove old column)
// database/migrations/2026_03_28_001_drop_user_name_column.php
// Run this AFTER all app servers are updated!

public function up(): void
{
    Schema::table('users', function (Blueprint $table) {
        $table->dropColumn('user_name');
    });
}
The same flow in raw MySQL, for teams not using a migration framework:

Step 1: Expand
-- V1__add_username_column.sql
ALTER TABLE users 
ADD COLUMN username VARCHAR(255) NULL AFTER user_name;

-- Create index for the new column
CREATE INDEX idx_users_username ON users(username);
Step 2: Migrate data (batched for large tables)
-- V2__copy_username_data.sql
-- For small tables (< 100k rows):
UPDATE users SET username = user_name WHERE username IS NULL;

-- For large tables, use batched updates:
-- Run this script repeatedly until 0 rows affected
UPDATE users 
SET username = user_name 
WHERE username IS NULL 
LIMIT 10000;

-- Check progress
SELECT 
    COUNT(*) as total,
    SUM(CASE WHEN username IS NOT NULL THEN 1 ELSE 0 END) as migrated
FROM users;
Step 3: Contract (after app update)
-- V3__drop_user_name_column.sql
-- ⚠️ Run only after ALL app instances use 'username'

-- First, make the new column NOT NULL
ALTER TABLE users 
MODIFY COLUMN username VARCHAR(255) NOT NULL;

-- Then drop the old column
ALTER TABLE users 
DROP COLUMN user_name;
With Flyway, the same phases become versioned SQL migrations, this time using database triggers for the dual-write:

V1__expand_add_username.sql
-- Flyway migration: Expand phase
ALTER TABLE users 
ADD COLUMN username VARCHAR(255);

-- Triggers to keep columns in sync during transition
DELIMITER //
CREATE TRIGGER sync_username_insert
BEFORE INSERT ON users
FOR EACH ROW
BEGIN
    IF NEW.username IS NULL THEN
        SET NEW.username = NEW.user_name;
    ELSEIF NEW.user_name IS NULL THEN
        SET NEW.user_name = NEW.username;
    END IF;
END//

CREATE TRIGGER sync_username_update
BEFORE UPDATE ON users
FOR EACH ROW
BEGIN
    -- Old code writes user_name; new code writes username. Mirror whichever changed.
    IF NOT (NEW.user_name <=> OLD.user_name) THEN
        SET NEW.username = NEW.user_name;
    ELSE
        SET NEW.user_name = NEW.username;
    END IF;
END//
DELIMITER ;
V2__migrate_username_data.sql
-- Backfill existing data
UPDATE users SET username = user_name WHERE username IS NULL;
V3__contract_drop_user_name.sql
-- Remove sync trigger first
DROP TRIGGER IF EXISTS sync_username_insert;
DROP TRIGGER IF EXISTS sync_username_update;

-- Make new column required
ALTER TABLE users 
MODIFY COLUMN username VARCHAR(255) NOT NULL;

-- Drop old column
ALTER TABLE users 
DROP COLUMN user_name;
Pro Tip: Deployment Timing

The Contract phase (V3) should be deployed days or weeks after the Expand and Migrate phases. This gives you time to:

  • Verify all application instances are using the new column
  • Monitor for any errors or edge cases
  • Roll back easily if issues are discovered
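Before running the Contract phase, it helps to verify the backfill directly. A minimal sketch of a verification query, using the table and column names from the example above:

```sql
-- Expect 0 before dropping user_name
SELECT COUNT(*) AS out_of_sync
FROM users
WHERE username IS NULL
   OR username <> user_name;
```

If this returns anything other than 0, either the backfill missed rows or a write path is not yet dual-writing.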

Blue-Green Database Deployments

Blue-Green deployment is a technique where you maintain two identical production environments. For databases, this typically involves maintaining two database instances or schemas, allowing instant cutover and rollback.

[Diagram: a load balancer routes traffic between Blue (App v1 + DB v1, current production) and Green (App v2 + DB v2), with data sync between the databases and an instant cutover/rollback traffic switch.]

Blue-Green allows instant traffic switching between database versions

When to Use Blue-Green for Databases

Scenario                 | Blue-Green Suitable? | Notes
Read-heavy workloads     | ✅ Excellent         | Replicas handle reads; master handles writes
Small databases (<10GB)  | ✅ Good              | Fast sync/restore between environments
Large databases (>100GB) | ⚠️ Challenging       | Consider logical replication or CDC instead
High-write workloads     | ❌ Avoid             | Data divergence during cutover is risky
Schema-only changes      | ✅ Good              | With expand-contract, works well
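For the suitable cases, the cutover itself can be driven by replication. A hedged sketch using MySQL 8 replication commands (hostnames are placeholders, and the exact steps depend on your topology):

```sql
-- On green (currently a read-only replica of blue):

-- 1. Confirm replication is fully caught up first
--    (SHOW REPLICA STATUS should report Seconds_Behind_Source = 0)
STOP REPLICA;

-- 2. Allow writes on green
SET GLOBAL read_only = OFF;

-- 3. Repoint the load balancer / application at green.

-- 4. Optionally reverse replication so blue now follows green,
--    keeping an instant rollback target. On blue:
-- CHANGE REPLICATION SOURCE TO SOURCE_HOST = 'green.example.com', ...;
-- START REPLICA;
```

Reversing replication in step 4 is what makes rollback "instant": switching traffic back to blue loses nothing written to green in the meantime.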

Rolling Migrations for Distributed Systems

In distributed systems with multiple application servers, rolling migrations allow you to update one node at a time while the others continue serving traffic. The database migration must support both old and new application code running simultaneously.

1. Deploy Backward-Compatible Migration

Run migrations that add new structures without breaking existing code. Use nullable columns, default values, and additive changes only.

2. Rolling Application Update

Update application servers one at a time. Each new server writes to both old and new columns. Old servers continue working with old schema.

3. Backfill Data

Once all servers are updated, run background jobs to populate new columns for existing records that were never dual-written.

4. Cleanup Migration

After the verification period, deploy the final migration to drop old columns. This can also be done in a rolling fashion.
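Step 1 in practice is an additive, nullable change that both app versions can live with. A minimal sketch (table and column names are illustrative):

```sql
-- Safe for rolling deploys: old servers simply ignore the new column
ALTER TABLE orders
ADD COLUMN shipping_region VARCHAR(64) NULL DEFAULT NULL;
```

Because the column is nullable with a default, inserts from servers that have never heard of it still succeed.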

Migrating Large Tables

When dealing with tables containing millions or billions of rows, standard migration approaches can cause serious issues: long-running transactions, table locks, replication lag, and high CPU usage. Here are strategies to handle large tables safely.

Online Schema Change Tools

Tools like pt-online-schema-change (Percona), gh-ost (GitHub), and lhm (SoundCloud) modify schemas without blocking reads or writes.

gh-ost example
# Add a column to a large table without locking
gh-ost \
  --host="db.example.com" \
  --database="myapp" \
  --table="users" \
  --alter="ADD COLUMN email_verified TINYINT DEFAULT 0" \
  --execute

# How it works:
# 1. Creates a ghost table with new schema
# 2. Copies data in chunks (configurable size)
# 3. Uses binary log to capture changes during copy
# 4. Atomic table swap at the end
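Percona's pt-online-schema-change covers the same use case with a trigger-based approach rather than the binary log. A sketch of the equivalent invocation (connection options omitted; check your version's documentation for defaults):

```shell
pt-online-schema-change \
  --alter "ADD COLUMN email_verified TINYINT DEFAULT 0" \
  D=myapp,t=users \
  --execute

# Unlike gh-ost, pt-osc installs triggers on the source table
# to replay writes into the copy while the chunked copy runs.
```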

Chunked Data Migration

For data transformations, process records in small batches with delays between batches to reduce load.

Laravel Chunked Migration
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\DB;

class MigrateUserEmailsJob implements ShouldQueue
{
    use Dispatchable, Queueable;

    public $timeout = 3600; // 1 hour max
    
    public function handle()
    {
        $lastId = cache('migration:users:last_id', 0);
        $batchSize = 1000;
        $processed = 0;
        
        do {
            $users = DB::table('users')
                ->where('id', '>', $lastId)
                ->whereNull('email_normalized')
                ->orderBy('id')
                ->limit($batchSize)
                ->get();
            
            foreach ($users as $user) {
                DB::table('users')
                    ->where('id', $user->id)
                    ->update([
                        'email_normalized' => strtolower($user->email)
                    ]);
                $lastId = $user->id;
                $processed++;
            }
            
            // Save progress
            cache(['migration:users:last_id' => $lastId], now()->addDays(7));
            
            // Throttle to reduce DB load
            usleep(100000); // 100ms between batches
            
        } while ($users->count() === $batchSize);
        
        logger()->info("Migration complete: {$processed} users processed");
    }
}

Shadow Tables Pattern

Create a new table with the desired schema, populate it incrementally, then swap tables atomically.

Shadow Table Migration
-- Step 1: Create shadow table
CREATE TABLE users_new LIKE users;
ALTER TABLE users_new ADD COLUMN status ENUM('active', 'inactive') DEFAULT 'active';

-- Step 2: Copy data in batches (run repeatedly)
INSERT INTO users_new (id, name, email, created_at, status)
SELECT id, name, email, created_at, 'active'
FROM users
WHERE id > (SELECT COALESCE(MAX(id), 0) FROM users_new)
ORDER BY id
LIMIT 10000;

-- Step 3: Set up triggers for ongoing changes
-- (add matching UPDATE and DELETE triggers so all writes are captured)
CREATE TRIGGER users_insert_shadow
AFTER INSERT ON users
FOR EACH ROW
INSERT INTO users_new VALUES (NEW.id, NEW.name, NEW.email, NEW.created_at, 'active');

-- Step 4: Atomic swap when ready
RENAME TABLE users TO users_old, users_new TO users;

Rollback Strategies

Every migration should have a rollback plan. The best time to plan for failure is before you need it.

Backup Before Migration

Take a point-in-time backup immediately before running migrations. Test restore procedures regularly.
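As a sketch, a pre-migration logical backup with mysqldump might look like the following (database name and destination are placeholders; very large databases usually warrant physical or snapshot backups instead):

```shell
# --single-transaction takes a consistent InnoDB snapshot
# without locking the tables for the duration of the dump
mysqldump --single-transaction --routines --triggers \
  myapp > backup_pre_migration_$(date +%Y%m%d_%H%M%S).sql
```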

Write Down() Methods

Every migration should have a working rollback. Test rollbacks in staging before production.

Feature Flags for App Changes

Use feature flags to toggle between old and new code paths. Allows instant rollback without deployment.

Irreversible Migrations

DROP COLUMN, DROP TABLE, and data transformations may be irreversible. Document and require explicit approval.

Feature Flag for Database Migration
class UserRepository
{
    public function getUsername(User $user): string
    {
        // Feature flag allows instant rollback
        if (feature('use-new-username-column')) {
            return $user->username;
        }
        
        return $user->user_name;
    }
    
    public function updateUsername(User $user, string $username): void
    {
        // Always dual-write during transition
        $user->update([
            'username' => $username,
            'user_name' => $username,
        ]);
    }
}

Migration Tools Comparison

Choosing the right migration tool depends on your stack, team size, and requirements. Here's a comparison of popular options:

Tool               | Language     | Best For              | Key Features
Laravel Migrations | PHP          | Laravel projects      | Schema builder, seeders, factory integration
Flyway             | Java/SQL     | Enterprise, Java apps | SQL-based, version control, CI/CD integration
Liquibase          | XML/YAML/SQL | Complex schemas       | Diff generation, rollback, multiple DB support
Prisma Migrate     | TypeScript   | Node.js projects      | Schema-first, type-safe, shadow database
Alembic            | Python       | SQLAlchemy projects   | Autogenerate, branching, offline mode
gh-ost             | Go           | Large MySQL tables    | Online schema change, no triggers, pausable

Migration Checklist

Use this checklist before running any production migration:

Pre-Migration Checklist

  • Backup verified and restore tested
  • Migration tested in staging with production-like data
  • Rollback procedure documented and tested
  • Monitoring dashboards ready
  • Team notified and on standby
  • Maintenance window scheduled (if needed)
  • Application compatible with both old and new schema

Common Mistakes to Avoid
  • Running untested migrations in production — Always test with production-like data volumes
  • Forgetting about long-running transactions — Set statement timeouts
  • Deploying app and migration together — Deploy migration first, then app (or use expand-contract)
  • Not monitoring replication lag — Large migrations can cause replicas to fall behind
  • Assuming rollback is always possible — Some changes destroy data irreversibly
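For the statement-timeout point above, MySQL exposes session variables you can set before a migration runs; a minimal sketch (values are illustrative):

```sql
-- Fail fast instead of queueing behind long-held metadata locks
SET SESSION lock_wait_timeout = 60;        -- seconds

-- Cap read queries issued by the migration session
-- (note: MySQL's max_execution_time applies to SELECT statements, in ms)
SET SESSION max_execution_time = 30000;
```

Failing fast turns a potential site-wide stall into a retryable error in the migration itself.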

Conclusion

Database migrations don't have to be scary. By following the strategies outlined in this guide—especially the Expand-Contract pattern—you can make schema changes safely, without downtime, and with confidence that you can roll back if needed.

Remember the key principles:

  • Expand before you contract: start with additive, backward-compatible changes
  • Keep old and new schemas working side by side while application servers roll over
  • Migrate large tables in small, throttled batches rather than one big transaction
  • Test rollbacks before you need them, and treat irreversible steps with extra scrutiny

With these strategies in your toolkit, you can evolve your database schema as your application grows, keeping your users happy and your data safe.


Written by Mayur Dabhi

Full-stack developer passionate about building scalable web applications and sharing knowledge with the developer community.