DevOps

Deploying Applications with Docker Compose

Mayur Dabhi
April 13, 2026
14 min read

Modern web applications rarely run as a single process. A typical stack includes a web server, an application runtime, a database, a cache layer, and often a message queue, all of which need to communicate with each other while staying isolated from the host. Docker Compose solves exactly this problem: it lets you define, configure, and run a multi-container Docker application with a single declarative YAML file and a single command. In this guide, you'll go from understanding core Compose concepts all the way to deploying a production-ready application stack.

Prerequisites

This guide assumes you have Docker Engine (20.10+) and Docker Compose v2 installed. Run docker compose version to verify. All examples use the modern docker compose command (note: no hyphen), which is now built into Docker Desktop and Docker Engine.

Why Docker Compose Exists

Before Compose, developers managed multi-container setups by running a series of docker run commands with lengthy flags for networking, volumes, and environment variables. This was error-prone, hard to share across a team, and nearly impossible to reproduce consistently.
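Under that model, standing up even a two-service stack meant something like the following (a rough sketch with hypothetical image and variable names, shown only to illustrate the pain):

```shell
# Create the shared network and volume by hand
docker network create app-network
docker volume create postgres_data

# Start the database, spelling out every option as a flag
docker run -d --name db --network app-network \
  -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=secret -e POSTGRES_DB=myapp \
  -v postgres_data:/var/lib/postgresql/data \
  postgres:16-alpine

# Start the app, repeating the connection details inline
docker run -d --name app --network app-network \
  -e DATABASE_URL=postgresql://postgres:secret@db:5432/myapp \
  -p 3000:3000 \
  myapp:latest
```

Every teammate had to run these commands in the right order with exactly the right flags, and a single typo could leave the stack subtly broken.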

Docker Compose solves this with a declarative approach: you describe your entire application stack in a docker-compose.yml file (services, networks, volumes, environment variables, dependencies) and Compose handles orchestrating them. The benefits are immediate: one shareable file, one command to bring the stack up, and an identical environment on every machine.

[Diagram: user → nginx :80 → app :3000 (Node.js/PHP) → db :5432 (PostgreSQL) and redis (cache/sessions), all on the internal app-network bridge, with a postgres_data volume]

A typical Docker Compose application stack with internal networking

Anatomy of a docker-compose.yml File

Everything in Docker Compose revolves around the docker-compose.yml (or compose.yaml) file. Let's break down the key building blocks with a real-world Node.js + PostgreSQL + Redis example.

docker-compose.yml

services:
  # ── Application ─────────────────────────────────────
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: myapp
    restart: unless-stopped
    environment:
      NODE_ENV: production
      DATABASE_URL: postgresql://postgres:secret@db:5432/myapp
      REDIS_URL: redis://redis:6379
    ports:
      - "3000:3000"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    networks:
      - app-network
    volumes:
      - ./uploads:/app/uploads

  # ── Database ────────────────────────────────────────────
  db:
    image: postgres:16-alpine
    container_name: myapp_db
    restart: unless-stopped
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # ── Cache ───────────────────────────────────────────────────
  redis:
    image: redis:7-alpine
    container_name: myapp_redis
    restart: unless-stopped
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    networks:
      - app-network

  # ── Reverse Proxy ───────────────────────────────────────
  nginx:
    image: nginx:alpine
    container_name: myapp_nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - app
    networks:
      - app-network

# ── Named Volumes ─────────────────────────────────────────
volumes:
  postgres_data:
  redis_data:

# ── Networks ──────────────────────────────────────────────────
networks:
  app-network:
    driver: bridge

Key Concepts Explained

Key          Purpose                                    Example
build        Build image from local Dockerfile          build: .
image        Pull pre-built image from registry         image: postgres:16-alpine
ports        Map host port to container port            "8080:80" (host:container)
volumes      Persist data or mount host files           postgres_data:/var/lib/postgresql/data
networks     Connect containers on a shared network     app-network
depends_on   Control startup order between services     depends_on: db
restart      Auto-restart policy on failure             unless-stopped, always
healthcheck  Define a health probe for the container    pg_isready -U postgres
depends_on is Not a Silver Bullet

depends_on controls startup order but does not wait for a service to be ready to accept connections, unless you use condition: service_healthy with a defined healthcheck. Without a healthcheck, your app container may start before the database has finished initializing, causing connection errors.
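In the stack above, redis only uses condition: service_started. Redis can get the same treatment; a sketch of a healthcheck using redis-cli ping (which the official redis image ships with), so dependants can wait on service_healthy for it too:

```yaml
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  app:
    depends_on:
      redis:
        condition: service_healthy
```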

Writing an Optimized Dockerfile

Compose builds your image using a Dockerfile. A well-structured Dockerfile is critical for fast builds and small image sizes. The key technique is multi-stage builds: you build in one stage and copy only the artifacts you need into a lean production image.

Dockerfile (Node.js, multi-stage)
# ── Stage 1: Build ────────────────────────────────────────
FROM node:20-alpine AS builder

WORKDIR /app

# Copy dependency files first (layer caching)
COPY package*.json ./
# Install all dependencies here; the build step below needs devDependencies
RUN npm ci

# Copy source code
COPY . .

# Build if using TypeScript or a bundler
RUN npm run build

# Drop devDependencies so only production modules reach the final image
RUN npm prune --omit=dev

# ── Stage 2: Production image ─────────────────────────────
FROM node:20-alpine AS runner

# Security: run as non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

WORKDIR /app

# Copy only production artifacts from builder
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/package.json .

USER appuser

EXPOSE 3000

CMD ["node", "dist/server.js"]
.dockerignore
node_modules
.git
.gitignore
*.md
.env
.env.*
dist
coverage
.nyc_output
*.log
docker-compose*.yml
Dockerfile*

The .dockerignore file tells Docker what to exclude from the build context, keeping the context small and preventing sensitive files like .env from ever entering an image layer.

Networking Between Services

One of the most powerful features of Docker Compose is its automatic DNS-based service discovery. Services within the same Compose network can reach each other by their service name; no IP addresses required.

Service-to-service communication
// In your app, connect to the database using the service name "db",
// NOT localhost or 127.0.0.1 - those refer to the container itself.

// Node.js (pg client)
const { Pool } = require('pg');

const pool = new Pool({
  host: 'db',       // ← the Compose service name
  port: 5432,
  database: 'myapp',
  user: 'postgres',
  password: 'secret',
});

// Redis (ioredis)
const Redis = require('ioredis');

const redis = new Redis({
  host: 'redis',    // ← the Compose service name
  port: 6379,
});

// Or via connection strings in environment variables:
// DATABASE_URL=postgresql://postgres:secret@db:5432/myapp
// REDIS_URL=redis://redis:6379
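You can sanity-check this DNS resolution from inside a running container (assuming the image provides getent, which busybox-based images like alpine do):

```shell
# Resolve the "db" service name from inside the app container
docker compose exec app getent hosts db

# The output maps the name to the container's IP on app-network
```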

Multiple Networks for Security Isolation

You can segment your stack into multiple networks to enforce security boundaries. The classic pattern is a frontend network (nginx ↔ app) and a backend network (app ↔ db). The database is never reachable from the nginx container directly.

Network segmentation pattern
services:
  nginx:
    networks:
      - frontend        # only on frontend network

  app:
    networks:
      - frontend        # speaks to nginx
      - backend         # speaks to db and redis

  db:
    networks:
      - backend         # only on backend; unreachable from nginx

  redis:
    networks:
      - backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true      # no external internet access
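A quick way to confirm the boundary is to try resolving the database from each side (a hypothetical check, assuming busybox-style tooling in the images):

```shell
# From nginx (frontend only): "db" should not resolve
docker compose exec nginx ping -c 1 db

# From app (on both networks): "db" resolves normally
docker compose exec app ping -c 1 db
```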

Managing Environment Variables Securely

Hardcoding secrets in docker-compose.yml is a serious security risk: that file usually ends up in version control. The right approach is a combination of .env files and Docker secrets.

1. Create a .env file (never commit this)

Store all sensitive values in a .env file at the project root and add it to .gitignore.

.env
POSTGRES_USER=myuser
POSTGRES_PASSWORD=sup3rS3cr3t!
POSTGRES_DB=myapp_prod
REDIS_PASSWORD=r3disPa$$
APP_SECRET_KEY=a1b2c3d4e5f6...
2. Reference variables in docker-compose.yml

Compose automatically loads the .env file from the same directory and interpolates ${VAR} placeholders.

docker-compose.yml โ€” environment variable interpolation
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}

  app:
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      SECRET_KEY: ${APP_SECRET_KEY}
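To confirm interpolation picked up your .env values, render the fully resolved configuration. Note that secrets appear in plain text in the output, so keep it out of logs:

```shell
# Print the merged configuration with all ${VAR} placeholders substituted
docker compose config

# Limit the output to a single service
docker compose config db
```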
3. Provide a .env.example for teammates

Commit a .env.example file with placeholder values so developers know what variables are required without exposing real secrets.

.env.example (safe to commit)
POSTGRES_USER=your_db_user
POSTGRES_PASSWORD=your_db_password
POSTGRES_DB=your_db_name
REDIS_PASSWORD=your_redis_password
APP_SECRET_KEY=your_secret_key_here

Volumes: Persisting Data

By default, container filesystems are ephemeral: data disappears when the container is removed. Volumes solve this by storing data outside the container lifecycle. There are three main mount types in Compose:

Type          Syntax                                    Best For                                          Managed By
Named Volume  postgres_data:/var/lib/postgresql/data    Database files, persistent app data               Docker (stored in /var/lib/docker/volumes/)
Bind Mount    ./src:/app/src                            Source code, config files, dev hot-reload         Host filesystem (your machine)
tmpfs Mount   tmpfs: /run/secrets                       Secrets, temporary in-memory data                 Host RAM (not persisted)
Development vs Production Volumes

In development, use bind mounts (./src:/app/src) so code changes reflect instantly without rebuilding. In production, bake code into the image and use named volumes only for truly persistent data like database files and uploaded media.

Essential Docker Compose Commands

Daily Compose Commands
# Start all services (detached/background mode)
docker compose up -d

# Start and rebuild images before starting
docker compose up -d --build

# View running containers
docker compose ps

# View logs (all services)
docker compose logs -f

# View logs for a specific service
docker compose logs -f app

# Stop all services (keeps containers)
docker compose stop

# Stop and remove containers + networks
docker compose down

# Stop and remove containers + networks + volumes (⚠️ deletes data!)
docker compose down -v

# Execute a command in a running container
docker compose exec app sh
docker compose exec db psql -U postgres -d myapp

# Scale a service to N replicas
docker compose up -d --scale app=3

# Pull latest images for all services
docker compose pull

# Rebuild a specific service image
docker compose build app

Development vs Production Compose Files

A common pattern is using multiple Compose files: a base docker-compose.yml for shared configuration, a docker-compose.dev.yml for development overrides, and a docker-compose.prod.yml for production settings. Compose merges them with the -f flag.

docker-compose.yml (base, shared)

services:
  app:
    build: .
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
    networks:
      - app-network

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  postgres_data:

networks:
  app-network:
docker-compose.dev.yml (development overrides)

services:
  app:
    # Override: mount source for hot reload
    volumes:
      - .:/app
      - /app/node_modules    # anonymous volume protects node_modules
    environment:
      NODE_ENV: development
    command: npm run dev     # Use nodemon or tsx watch
    ports:
      - "3000:3000"
      - "9229:9229"          # Node.js debugger port

  db:
    # Expose DB port for local DB clients (TablePlus, DBeaver)
    ports:
      - "5432:5432"
docker-compose.prod.yml (production overrides)

services:
  app:
    restart: unless-stopped
    environment:
      NODE_ENV: production
    # No ports here; nginx handles incoming traffic
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certbot/conf:/etc/letsencrypt:ro
    depends_on:
      - app
Running with merged files
# Development
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d

# Production
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# Alternatively, set COMPOSE_FILE environment variable
export COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml
docker compose up -d

Production Deployment Checklist

Before pushing your Compose stack to production, run through these critical checks:

Security Hardening

  • Run containers as non-root users (use USER in Dockerfile)
  • Never expose database ports (5432, 6379) to the host in production
  • Use read-only filesystem: read_only: true in the service definition
  • Pin image tags: use postgres:16.2-alpine, not postgres:latest
  • Store secrets in Docker Secrets or a vault (Vault by HashiCorp, AWS Secrets Manager)
  • Enable no-new-privileges: security_opt: ["no-new-privileges:true"]
  • Use internal: true on backend networks to block external access

Performance & Reliability

  • Add healthcheck to all stateful services (db, redis, queues)
  • Set restart: unless-stopped on all services
  • Configure mem_limit and cpus to prevent resource hogging
  • Use named volumes for all data that must survive container restarts
  • Enable PostgreSQL connection pooling via PgBouncer to handle traffic spikes
  • Configure nginx with gzip compression and proper proxy headers
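The PgBouncer item can be sketched as one more Compose service sitting between app and db. This assumes the community edoburu/pgbouncer image and its environment variable names; adapt it to whichever PgBouncer image you actually use:

```yaml
  pgbouncer:
    image: edoburu/pgbouncer
    restart: unless-stopped
    environment:
      DB_HOST: db
      DB_USER: ${POSTGRES_USER}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      POOL_MODE: transaction
    networks:
      - app-network
    depends_on:
      - db

  # The app then connects through the pooler instead of hitting db directly:
  #   DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@pgbouncer:5432/${POSTGRES_DB}
```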
Complete production-hardened service example
services:
  app:
    image: myregistry.com/myapp:1.4.2    # pinned tag from CI/CD
    restart: unless-stopped
    read_only: true
    tmpfs:
      - /tmp                              # writable temp directory
    security_opt:
      - no-new-privileges:true
    user: "1001:1001"
    environment:
      NODE_ENV: production
      DATABASE_URL: ${DATABASE_URL}
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          memory: 256M
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - frontend
      - backend
[Diagram: developer git push → CI/CD builds and pushes the image → registry (Docker Hub / ECR) → server runs docker compose pull and up → live]

A typical CI/CD pipeline using Docker Compose for deployment

Zero-Downtime Deployments

Docker Compose itself doesn't have rolling update capabilities (that's Swarm/Kubernetes territory), but you can achieve near-zero downtime with a careful approach:

deploy.sh (production deployment script)
#!/bin/bash
set -e

echo "🔄 Pulling latest images..."
docker compose -f docker-compose.yml -f docker-compose.prod.yml pull

echo "🚀 Starting new containers..."
# --no-deps: recreate only the app service, not the services it depends on
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --no-deps app

echo "⏳ Waiting for health check..."
# Poll for up to 60s instead of a fixed sleep; start_period delays the first probe
HEALTH=starting
for i in $(seq 1 12); do
  HEALTH=$(docker inspect --format='{{.State.Health.Status}}' myapp)
  if [ "$HEALTH" = "healthy" ]; then break; fi
  sleep 5
done

if [ "$HEALTH" != "healthy" ]; then
  echo "❌ Container is not healthy! Stopping it..."
  docker compose -f docker-compose.yml -f docker-compose.prod.yml stop app
  exit 1
fi

echo "🧹 Removing old images..."
docker image prune -f

echo "✅ Deployment complete!"

Conclusion

Docker Compose transforms multi-container application management from a fragile series of manual commands into a reliable, declarative workflow. You've learned how to structure a production-grade docker-compose.yml, write optimized Dockerfiles with multi-stage builds, manage secrets securely, separate development and production configurations, and deploy safely.

Key Takeaways

  • Use service names (not IPs or localhost) for inter-container communication
  • Always define healthchecks and use condition: service_healthy in depends_on
  • Never commit .env files; commit .env.example instead
  • Use named volumes for databases and bind mounts for development hot-reload
  • Split configuration into base + dev + prod files for clean environment separation
  • Pin image tags in production; never use :latest in a deployment
  • Run containers as non-root users and apply security options
"Docker Compose makes the 'it works on my machine' problem go away โ€” because everyone's machine runs the same containers."

Once you've mastered Docker Compose, the natural next step is Docker Swarm or Kubernetes for orchestrating containers across multiple hosts with automatic scaling and true rolling deployments. But for the vast majority of applications, Compose provides everything you need to deploy confidently and repeatedly.

Tags: Docker, Docker Compose, Deployment, DevOps, Containers, Production
Mayur Dabhi

Full Stack Developer with 5+ years of experience building scalable web applications with Laravel, React, and Node.js.