
Database Transactions: ACID Properties

Mayur Dabhi
April 15, 2026
14 min read

Imagine you're transferring $500 from your savings account to your checking account. The bank's system debits your savings first, then credits your checking. But what happens if the system crashes between those two operations? The $500 simply vanishes: debited from savings, never credited to checking. This is exactly the problem that database transactions and ACID properties were designed to solve. Understanding them is foundational to building reliable, production-grade software.

Why ACID Matters

Every serious application, from banking systems to e-commerce carts to social networks, depends on ACID transactions to guarantee data integrity. A single operation left outside a transaction boundary can corrupt data silently, leading to bugs that are extremely difficult to trace and fix later.

What is a Database Transaction?

A transaction is a sequence of one or more database operations (reads and writes) that are treated as a single logical unit of work. A transaction either completes entirely — committing all its changes — or rolls back completely, leaving the database as if the transaction never happened.

The classic example is a bank transfer. Consider the following operations:

  1. Read the balance of Account A
  2. Deduct $500 from Account A
  3. Read the balance of Account B
  4. Add $500 to Account B

All four steps must succeed together. If step 2 succeeds but step 4 fails, money disappears. Transactions guarantee this "all or nothing" behavior.

SQL — Basic Transaction Syntax
-- Start a transaction
BEGIN;

-- Deduct from Account A
UPDATE accounts SET balance = balance - 500 WHERE id = 1;

-- Add to Account B
UPDATE accounts SET balance = balance + 500 WHERE id = 2;

-- If everything succeeded, commit
COMMIT;

-- If something went wrong, roll back all changes
-- ROLLBACK;

Transaction lifecycle: BEGIN → Operations → COMMIT or ROLLBACK

ACID: The Four Pillars

ACID stands for Atomicity, Consistency, Isolation, and Durability. These four properties define what it means for a database to reliably process transactions. The acronym was coined by Theo Härder and Andreas Reuter in 1983, building on Jim Gray's earlier work on transactions, and it remains the gold standard for transactional databases today.

  • Atomicity: all or nothing. Prevents partial updates that leave a corrupt state.
  • Consistency: valid state before and after. Prevents constraint violations and invalid data.
  • Isolation: concurrent transactions don't interfere. Prevents race conditions and dirty reads.
  • Durability: committed data survives failures. Prevents data loss after a crash or power failure.

Atomicity — All or Nothing

Atomicity guarantees that a transaction is treated as a single indivisible unit. Either all operations in the transaction succeed and are committed, or none of them are applied and the database is left unchanged. There is no in-between state.

The word "atomic" comes from the Greek atomos, meaning "indivisible." The database engine achieves atomicity using a write-ahead log (WAL) or undo log — it records intended changes before applying them, so it can roll back if anything goes wrong.
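To make the undo-log idea concrete, here is a toy Python sketch: an in-memory dict stands in for the data pages, and `run_transaction` is a made-up helper, not any real engine's API.

```python
# Toy undo-log atomicity: record each old value BEFORE applying the
# change, so every change can be reversed if anything goes wrong.

def run_transaction(store, operations):
    """Apply operations atomically; each op is (key, new_value)."""
    undo_log = []  # (key, old_value) before-images, logged first
    try:
        for key, new_value in operations:
            undo_log.append((key, store.get(key)))  # log before applying
            store[key] = new_value
            if new_value is not None and new_value < 0:
                raise ValueError("negative balance")  # simulated failure
    except Exception:
        # Roll back: replay before-images in reverse order
        for key, old_value in reversed(undo_log):
            if old_value is None:
                store.pop(key, None)
            else:
                store[key] = old_value
        raise

store = {"A": 500, "B": 0}
try:
    # The second write would drive A negative, so the whole
    # transaction is undone, including the write to B.
    run_transaction(store, [("B", 500), ("A", -100)])
except ValueError:
    pass
print(store)  # {'A': 500, 'B': 0} -- unchanged, as if nothing happened
```

The key invariant is the "write-ahead" ordering: the before-image reaches the log before the change is applied, so a rollback is always possible.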

MySQL — Atomicity in Action
START TRANSACTION;

-- Step 1: Deduct from sender
UPDATE accounts
SET balance = balance - 500
WHERE user_id = 1 AND balance >= 500;

-- Check if the update actually affected a row
-- (balance was sufficient)
-- In application code, check affected rows here

-- Step 2: Credit to receiver
UPDATE accounts
SET balance = balance + 500
WHERE user_id = 2;

-- Step 3: Log the transfer
INSERT INTO transfer_log (from_user, to_user, amount, created_at)
VALUES (1, 2, 500, NOW());

-- All succeeded — commit
COMMIT;

-- If any step failed, the application would call:
-- ROLLBACK;
-- And the database reverts to the state before BEGIN
Common Atomicity Mistake

Not all databases enable transactions by default. MySQL with the MyISAM storage engine does NOT support transactions — only InnoDB does. Always use InnoDB for any table that requires transactional integrity. Modern MySQL defaults to InnoDB, but double-check legacy databases.

Atomicity in Application Code

Most frameworks provide transaction helpers so you don't have to manually write BEGIN/COMMIT/ROLLBACK:

// Laravel automatically handles ROLLBACK on exception
DB::transaction(function () use ($fromId, $toId, $amount) {
    // Deduct from sender
    Account::where('user_id', $fromId)
        ->decrement('balance', $amount);

    // Credit receiver
    Account::where('user_id', $toId)
        ->increment('balance', $amount);

    // Log the transfer
    TransferLog::create([
        'from_user' => $fromId,
        'to_user'   => $toId,
        'amount'    => $amount,
    ]);
});
// If any exception is thrown inside, Laravel auto-rolls back
// Node.js (Sequelize) — explicit transaction with manual commit/rollback
const t = await sequelize.transaction();

try {
    await Account.decrement('balance', {
        by: amount,
        where: { userId: fromId },
        transaction: t
    });

    await Account.increment('balance', {
        by: amount,
        where: { userId: toId },
        transaction: t
    });

    await TransferLog.create(
        { fromUser: fromId, toUser: toId, amount },
        { transaction: t }
    );

    await t.commit();
} catch (error) {
    await t.rollback();
    throw error;
}
# Python (SQLAlchemy) — session-scoped transaction
from sqlalchemy.orm import Session

def transfer_funds(db: Session, from_id: int, to_id: int, amount: float):
    try:
        # SQLAlchemy sessions are transactional by default
        sender = db.query(Account).filter_by(user_id=from_id).one()
        receiver = db.query(Account).filter_by(user_id=to_id).one()

        if sender.balance < amount:
            raise ValueError("Insufficient funds")

        sender.balance -= amount
        receiver.balance += amount

        db.add(TransferLog(
            from_user=from_id, to_user=to_id, amount=amount
        ))

        db.commit()
    except Exception:
        db.rollback()
        raise

Consistency — Valid State Before and After

Consistency ensures that a transaction brings the database from one valid state to another. Any data written must satisfy all defined rules — including constraints, cascades, triggers, and application-level invariants. If a transaction would violate any rule, it must be rejected.

There are two layers of consistency:

  1. Database-level: rules the engine enforces itself, such as NOT NULL, UNIQUE, CHECK, and FOREIGN KEY constraints, plus cascades and triggers
  2. Application-level: business invariants the schema cannot express, which must be validated in code

SQL — Consistency via Constraints
CREATE TABLE accounts (
    id          INT PRIMARY KEY AUTO_INCREMENT,
    user_id     INT NOT NULL,
    balance     DECIMAL(15, 2) NOT NULL DEFAULT 0.00,

    -- CHECK constraint: balance can never go below zero
    CONSTRAINT chk_balance CHECK (balance >= 0),

    -- Foreign key: user must exist
    CONSTRAINT fk_user FOREIGN KEY (user_id)
        REFERENCES users(id) ON DELETE CASCADE
);

-- This will FAIL — consistency violation
UPDATE accounts SET balance = -100 WHERE id = 1;
-- ERROR 3819: Check constraint 'chk_balance' is violated.

-- This will ALSO FAIL — foreign key violation
INSERT INTO accounts (user_id, balance) VALUES (9999, 100.00);
-- ERROR 1452: Cannot add or update a child row: a foreign key constraint fails

Note that Consistency is the one ACID property that depends partly on the application developer — the database can only enforce rules you explicitly define. If your business rule is "an order total must equal the sum of its line items," that must be validated in code, not just trusted to the DB.
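As a sketch of that split, the following example (using Python's standard-library sqlite3; the schema and the `create_order` helper are hypothetical) enforces the "total equals sum of line items" rule in code, while the CHECK and FOREIGN KEY rules live in the schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id    INTEGER PRIMARY KEY,
        total REAL NOT NULL
    );
    CREATE TABLE order_items (
        order_id INTEGER NOT NULL REFERENCES orders(id),
        price    REAL NOT NULL,
        quantity INTEGER NOT NULL CHECK (quantity > 0)
    );
""")

def create_order(conn, claimed_total, items):
    """items: list of (price, quantity) tuples."""
    # Application-level invariant the schema cannot express:
    # the order total must equal the sum of its line items.
    computed = sum(price * qty for price, qty in items)
    if abs(computed - claimed_total) > 1e-9:
        raise ValueError("order total must equal sum of line items")
    with conn:  # the with-block commits on success, rolls back on exception
        cur = conn.execute(
            "INSERT INTO orders (total) VALUES (?)", (claimed_total,))
        order_id = cur.lastrowid
        conn.executemany(
            "INSERT INTO order_items (order_id, price, quantity) "
            "VALUES (?, ?, ?)",
            [(order_id, p, q) for p, q in items])
    return order_id

order_id = create_order(conn, 150.00, [(50.00, 2), (50.00, 1)])  # accepted
```

A call with a mismatched total, such as `create_order(conn, 999.00, [(50.00, 2)])`, raises before any row is written, so the database never sees the invalid state.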

Isolation — Concurrent Transactions Don't Interfere

Isolation is arguably the most complex ACID property. It determines how and when changes made by one transaction become visible to other concurrent transactions. Without isolation, simultaneous operations on shared data can produce incorrect results even when each transaction is individually correct.

Isolation Anomalies

Without proper isolation, several anomalies can occur:

  • Dirty Read: reading uncommitted changes from another transaction. Example: Tx B reads a balance updated by Tx A before Tx A commits or rolls back.
  • Non-repeatable Read: the same query returns different results within one transaction. Example: Tx A reads a row twice; Tx B updates it between the reads.
  • Phantom Read: a query returns a different set of rows when re-executed. Example: Tx A counts rows twice; Tx B inserts a new row between the counts.
  • Lost Update: one transaction's update overwrites another's. Example: Tx A and Tx B both read balance=100 and both add 50; the result is 150, not 200.
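The lost-update row can be reproduced in a few lines with Python's sqlite3, using two connections to one shared in-memory database (the database name in the URI is arbitrary):

```python
import sqlite3

# Two connections to the same shared in-memory database
uri = "file:lost_update_demo?mode=memory&cache=shared"
conn_a = sqlite3.connect(uri, uri=True)
conn_b = sqlite3.connect(uri, uri=True)
conn_a.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn_a.execute("INSERT INTO accounts VALUES (1, 100)")
conn_a.commit()

# Naive read-modify-write: both connections read 100, then each
# writes back "what it read + 50", so one increment is overwritten.
bal_a = conn_a.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
bal_b = conn_b.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
conn_a.execute("UPDATE accounts SET balance = ? WHERE id = 1", (bal_a + 50,))
conn_a.commit()
conn_b.execute("UPDATE accounts SET balance = ? WHERE id = 1", (bal_b + 50,))
conn_b.commit()

final = conn_a.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(final)  # 150, not 200: one update was lost

# One fix: make the read-modify-write a single atomic statement
conn_a.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 1")
conn_a.commit()
```

In a real system the same fix applies: prefer `balance = balance + ?` over reading a value into application memory and writing it back, or use a stricter isolation level / explicit row lock when the read truly must precede the write.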

Isolation Levels

The SQL standard defines four isolation levels, each preventing a different set of anomalies at the cost of more or less concurrency:

  • READ UNCOMMITTED: dirty reads, non-repeatable reads, and phantoms all possible. Fastest; most concurrency, least safety.
  • READ COMMITTED: prevents dirty reads; non-repeatable reads and phantoms possible. PostgreSQL's default.
  • REPEATABLE READ: prevents dirty and non-repeatable reads; phantoms possible*. MySQL's default.
  • SERIALIZABLE: prevents all three anomalies. Slowest; least concurrency, most safety.

SQL Isolation Levels and the anomalies they prevent (* MySQL's InnoDB prevents phantoms at REPEATABLE READ using MVCC gap locks)

SQL — Setting Isolation Levels
-- MySQL: Set isolation level for next transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- ... your queries ...
COMMIT;

-- PostgreSQL: Set for the current transaction
BEGIN ISOLATION LEVEL REPEATABLE READ;
-- ... your queries ...
COMMIT;

-- Check current isolation level (MySQL)
SELECT @@transaction_isolation;

-- Check current isolation level (PostgreSQL)
SHOW transaction_isolation;

How Isolation Is Implemented: MVCC

Most modern databases (PostgreSQL, MySQL/InnoDB, SQLite) implement isolation using Multi-Version Concurrency Control (MVCC). Instead of locking rows to block readers, MVCC keeps multiple versions of each row. Each transaction sees a consistent snapshot of the database from the moment it started, allowing reads and writes to proceed concurrently without blocking each other.

MVCC Performance Insight

Because MVCC readers never block writers and writers never block readers, databases like PostgreSQL can serve high read throughput even under heavy write load. The trade-off is periodic vacuum/autovacuum operations that reclaim storage from old row versions. Understanding MVCC explains why long-running transactions are harmful — they prevent old versions from being cleaned up, causing table bloat.
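A toy sketch of the version-list idea (purely illustrative; real engines store versions in data pages and track transaction ids, not a Python dict):

```python
import itertools

# Toy MVCC: each key maps to a list of (commit_ts, value) versions.
# A transaction reads the newest version committed at or before its
# snapshot timestamp, so later writers never disturb its reads.

class MVCCStore:
    def __init__(self):
        self._versions = {}              # key -> [(commit_ts, value), ...]
        self._clock = itertools.count(1)

    def write(self, key, value):
        """Commit a new version of key (single writer, for simplicity)."""
        ts = next(self._clock)
        self._versions.setdefault(key, []).append((ts, value))
        return ts

    def snapshot_read(self, key, snapshot_ts):
        """Newest version visible at snapshot_ts, ignoring later writes."""
        visible = [v for ts, v in self._versions.get(key, [])
                   if ts <= snapshot_ts]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("balance", 100)      # commits at ts=1
snapshot = 1                     # a long-running transaction starts here
store.write("balance", 50)       # a later write commits at ts=2
print(store.snapshot_read("balance", snapshot))  # 100: old version retained
```

The old version at ts=1 cannot be discarded while any snapshot at ts=1 is still open, which is exactly why long-running transactions cause bloat: vacuum has to keep every version some live snapshot might still need.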

Durability — Committed Data Survives

Durability guarantees that once a transaction is committed, it remains committed — even in the event of a system crash, power failure, or database restart. The data must be persisted to non-volatile storage before the commit is acknowledged to the client.

Databases achieve durability through a Write-Ahead Log (WAL) — every change is first written to a log file on disk before being applied to the data files. On recovery after a crash, the database replays the WAL to restore committed transactions that may not have been flushed to the main data files.

  1. Transaction writes to WAL: before modifying any data page, the database writes a log record describing the change (before and after images) to the Write-Ahead Log on disk.

  2. WAL is fsynced to disk: the database calls fsync() to force the OS to flush WAL buffers to physical storage. Only after this completes is durability guaranteed.

  3. COMMIT acknowledged: the database responds "COMMIT successful" to the client. At this point the transaction is durable; even if the server crashes a millisecond later, the data is safe.

  4. Data pages written asynchronously: the actual data file pages (heap, B-tree indexes) are updated in the background. If a crash occurs before this, the WAL is replayed during recovery to reconstruct the changes.
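The steps above can be sketched in Python; the JSON-lines log format and the helper names here are illustrative, not any real engine's on-disk format:

```python
import json
import os
import tempfile

def wal_commit(wal_path, txn_id, changes):
    """Steps 1-3: log the change, fsync it, only then acknowledge."""
    record = json.dumps({"txn": txn_id, "changes": changes})
    with open(wal_path, "a") as wal:
        wal.write(record + "\n")    # step 1: write the log record
        wal.flush()
        os.fsync(wal.fileno())      # step 2: force it to physical storage
    return "COMMIT successful"      # step 3: acknowledge after the fsync

def wal_recover(wal_path):
    """Step 4's crash path: rebuild state by replaying committed records."""
    state = {}
    with open(wal_path) as wal:
        for line in wal:
            state.update(json.loads(line)["changes"])
    return state

wal_path = os.path.join(tempfile.mkdtemp(), "wal.log")
ack = wal_commit(wal_path, 1, {"savings": 0, "checking": 500})
# Even if the process died right here, before touching any data file,
# recovery would replay the log and reconstruct the committed change.
recovered = wal_recover(wal_path)
print(recovered)  # {'savings': 0, 'checking': 500}
```

Note that the acknowledgment only happens after the fsync returns; that ordering, not the write itself, is what makes the commit durable.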

PostgreSQL — Durability Configuration
# postgresql.conf key settings for durability

# MUST be on for full durability (default: on)
# Turning it off trades crash safety for speed
fsync = on

# How WAL is flushed to disk (best method varies by OS):
# fdatasync, fsync, open_sync, open_datasync
wal_sync_method = fdatasync

# Wait for the WAL flush before acknowledging COMMIT
# off = async commit (faster; recent commits may be lost on crash)
# on  = synchronous commit (default, fully durable)
synchronous_commit = on

# MySQL InnoDB equivalent (my.cnf):
# innodb_flush_log_at_trx_commit = 1 (flush at each commit; fully durable, default)
# innodb_flush_log_at_trx_commit = 0 (write and flush once per second; risky)
# innodb_flush_log_at_trx_commit = 2 (write at commit, flush once per second)
Durability Trade-off Warning

Some teams disable fsync or set synchronous_commit = off for performance in development or analytics workloads. Never do this on a production transactional database. A single power failure can corrupt the entire database or lose recently committed transactions. The performance gain rarely justifies the risk.

Savepoints — Partial Rollbacks

Transactions are all-or-nothing by default, but SQL provides savepoints — named checkpoints within a transaction that allow you to roll back to a specific point without aborting the entire transaction. This is useful for complex operations where you want to retry individual steps.

SQL — Savepoints
BEGIN;

-- Step 1: Create order header
INSERT INTO orders (user_id, total) VALUES (42, 150.00);
-- Capture the order id now: later inserts into order_items
-- would overwrite LAST_INSERT_ID()
SET @order_id = LAST_INSERT_ID();

-- Save current point
SAVEPOINT after_order;

-- Step 2: Add line items
INSERT INTO order_items (order_id, product_id, quantity, price)
VALUES (@order_id, 7, 2, 50.00);

INSERT INTO order_items (order_id, product_id, quantity, price)
VALUES (@order_id, 12, 1, 50.00);

-- Oops: one item is out of stock, but we can keep the order
ROLLBACK TO SAVEPOINT after_order;
-- The order row still exists; the line items were undone

-- Insert different items
INSERT INTO order_items (order_id, product_id, quantity, price)
VALUES (@order_id, 5, 3, 50.00);

-- Release the savepoint (optional cleanup)
RELEASE SAVEPOINT after_order;

COMMIT;
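The same savepoint commands work from application code. Here is a small sqlite3 sketch (SQLite supports the same SAVEPOINT / ROLLBACK TO / RELEASE syntax; the table is simplified for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage BEGIN/COMMIT manually
conn.execute("CREATE TABLE order_items (product_id INT, quantity INT)")

conn.execute("BEGIN")
conn.execute("INSERT INTO order_items VALUES (7, 2)")
conn.execute("SAVEPOINT after_first_item")
conn.execute("INSERT INTO order_items VALUES (12, 1)")
# The second item turns out to be unavailable: undo only that insert
conn.execute("ROLLBACK TO SAVEPOINT after_first_item")
conn.execute("RELEASE SAVEPOINT after_first_item")
conn.execute("COMMIT")

rows = conn.execute("SELECT product_id FROM order_items").fetchall()
print(rows)  # [(7,)]: the first item survived, the second was undone
```

ROLLBACK TO undoes work after the savepoint but keeps the transaction open, so the retry and the final COMMIT proceed normally.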

Distributed Transactions and the CAP Theorem

ACID is well-understood for single-node databases, but what about distributed systems — where data is spread across multiple servers or microservices? This is where things get significantly harder.

Two-Phase Commit (2PC)

The traditional approach to distributed transactions is the Two-Phase Commit (2PC) protocol. A coordinator asks all participant nodes to "prepare" (phase 1), and if all agree, instructs them to "commit" (phase 2). While correct, 2PC is slow, blocking, and problematic if the coordinator crashes.


Two-Phase Commit (2PC): Coordinator orchestrates prepare and commit across all nodes
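A toy sketch of the coordinator's logic (the participant class and its can_commit flag are invented for illustration; a real 2PC implementation must also persist its decisions and handle timeouts and coordinator recovery):

```python
# Toy two-phase commit: every participant must vote "yes" in the
# prepare phase before any participant is told to commit.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
        self.state = "idle"

    def prepare(self):                  # phase 1: vote yes/no
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):                   # phase 2a: coordinator says commit
        self.state = "committed"

    def rollback(self):                 # phase 2b: coordinator says abort
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: collect votes; a single "no" aborts everyone
    if all(p.prepare() for p in participants):
        for p in participants:          # phase 2: unanimous yes -> commit
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "aborted"

nodes = [Participant("DB1"), Participant("DB2"),
         Participant("DB3", can_commit=False)]
outcome = two_phase_commit(nodes)
print(outcome)  # aborted: one "no" vote aborts every participant
```

The blocking problem is visible in the structure: between "prepared" and the coordinator's phase-2 message, a participant can do nothing but wait, which is why a crashed coordinator can stall the whole system.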

The CAP Theorem

The CAP Theorem states that a distributed system can guarantee at most two of three properties simultaneously:

  • Consistency: every read sees the most recent committed write
  • Availability: every request receives a non-error response
  • Partition tolerance: the system keeps operating when network messages between nodes are lost or delayed

Since network partitions are unavoidable in distributed systems, real-world systems must choose between CP (consistent but may be unavailable during partition — e.g., HBase, ZooKeeper) or AP (available but may return stale data — e.g., Cassandra, DynamoDB). Traditional RDBMS on a single node is CA but not partition-tolerant by nature.

Transactions and Performance

Transactions are not free. They introduce overhead from locking, logging, and coordination. Here are practical techniques to keep transactions performant:

Transaction Performance Best Practices

  • Keep transactions short: Long transactions hold locks longer, blocking other queries. Do all computation outside the transaction, then execute the DB writes quickly.
  • Avoid user interaction inside transactions: Never wait for user input (HTTP request, API call) while a transaction is open — it could block for seconds or minutes.
  • Choose the right isolation level: Use READ COMMITTED for most OLTP workloads unless you need stronger guarantees. SERIALIZABLE is rarely necessary and significantly reduces throughput.
  • Index your WHERE clauses: Unindexed updates inside transactions acquire more locks (table scans lock more rows), increasing contention.
  • Batch wisely: For bulk inserts or updates, use chunked transactions (e.g., 1000 rows per transaction) rather than one giant transaction or one transaction per row.
  • Monitor lock wait timeouts: Configure innodb_lock_wait_timeout (MySQL) or lock_timeout (PostgreSQL) to prevent indefinite blocking in production.
SQL — Chunked Batch Processing
-- Instead of one massive transaction for 1 million rows:
-- BEGIN; UPDATE ... (1M rows); COMMIT;  ← BAD

-- Use chunked transactions for bulk operations
-- (PostgreSQL 11+; COMMIT is allowed inside a top-level DO block)
DO $$
DECLARE
    affected INT;
BEGIN
    LOOP
        -- PostgreSQL's UPDATE has no LIMIT clause, so pick each
        -- batch of ids in a subquery; already-updated rows no
        -- longer match the filter, so no offset bookkeeping is needed
        UPDATE users
        SET    tier = 'gold'
        WHERE  id IN (
            SELECT id
            FROM   users
            WHERE  total_spend > 1000
              AND  tier != 'gold'
            LIMIT  1000
        );

        GET DIAGNOSTICS affected = ROW_COUNT;
        EXIT WHEN affected = 0;

        COMMIT;  -- each batch commits independently
    END LOOP;
END $$;

Key Takeaways

Database transactions and ACID properties are the bedrock of data integrity for any serious application. Let's recap what each property does for you:

ACID Summary

  • Atomicity — Your bank transfer either completes fully or not at all. Use BEGIN/COMMIT/ROLLBACK (or framework helpers) to wrap related operations.
  • Consistency — Define constraints (NOT NULL, FOREIGN KEY, CHECK) at the database level, and enforce business rules in application code. Never trust the client to send valid data.
  • Isolation — Choose the right isolation level for your workload. READ COMMITTED is a sensible default for most web applications. Use SERIALIZABLE only when correctness demands it.
  • Durability — Keep fsync = on and synchronous_commit = on in production. The performance cost is worth the guarantee that committed data survives a crash.
"ACID properties don't make databases slow — they make databases correct. And correct is what your users actually need."
— Jim Gray, Turing Award Winner and co-author of Transaction Processing

Understanding transactions deeply will transform how you design schemas, write queries, and build application logic. The next time you reach for a framework's transaction helper, you'll know exactly what guarantees it's providing — and what could go wrong without it.

Transactions ACID Database MySQL PostgreSQL SQL Data Integrity
Mayur Dabhi

Full Stack Developer with 5+ years of experience building scalable web applications with Laravel, React, and Node.js.