
Understanding Docker: Containers for Developers

Mayur Dabhi
February 26, 2026
22 min read

If you've ever uttered the phrase "but it works on my machine!" or spent hours setting up a development environment, Docker is about to change your life. Docker has revolutionized how we build, ship, and run applications by solving one of software development's oldest problems: environment consistency.

In this comprehensive guide, we'll demystify Docker from the ground up. You'll learn the core concepts, master essential commands, and build real-world containerized applications. By the end, you'll understand why Docker has become an indispensable tool for modern developers.

Why Docker Matters

Docker ensures your application runs the exact same way everywhere—your laptop, your colleague's machine, staging servers, and production. No more "works on my machine" excuses. No more dependency nightmares. Just consistent, reproducible environments.

What is Docker?

Docker is a platform that uses containerization technology to package applications and their dependencies into isolated, portable units called containers. Think of containers as lightweight, standalone packages that include everything an application needs to run: code, runtime, libraries, and system tools.

Unlike virtual machines that virtualize hardware and run entire operating systems, containers share the host OS kernel and isolate only the application layer. This makes them incredibly fast to start and efficient with resources.

[Diagram: Virtual Machines vs Docker Containers. VMs stack a guest OS, binaries/libraries, and app on top of a hypervisor (heavy, slow startup, GBs in size); containers share the host OS through the Docker Engine (lightweight, fast startup, MBs in size).]

Virtual machines include entire guest operating systems, while containers share the host kernel

Core Docker Concepts

Before diving into commands, let's understand the fundamental building blocks of Docker. These concepts are essential for working effectively with containers.

Images

Read-only templates used to create containers. Think of them as blueprints.

Containers

Running instances of images. Isolated environments where your app executes.

Dockerfile

A script with instructions to build a Docker image automatically.

Volumes

Persistent data storage that survives the container lifecycle.

Networks

Enable communication between containers and the outside world.

Registry

Repository for storing and distributing Docker images (like Docker Hub).

Docker Architecture

Docker uses a client-server architecture. The Docker client communicates with the Docker daemon, which does the heavy lifting of building, running, and distributing containers.

[Diagram: Docker Architecture. The Docker client (CLI commands like docker build, docker run, docker pull) talks over a REST API to the Docker daemon on the Docker host, which manages containers and images (e.g. node:18, nginx:latest, postgres:15) and pulls from a registry such as Docker Hub.]

Docker client sends commands to the daemon, which manages containers, images, and communicates with registries

Installing Docker

Docker Desktop is the easiest way to get started on Windows and macOS. For Linux, you'll install Docker Engine directly.

Windows

  1. Download Docker Desktop for Windows
  2. Run the installer and follow the prompts
  3. Enable WSL 2 when prompted (recommended)
  4. Restart your computer
  5. Open Docker Desktop and wait for it to start

Requires Windows 10/11 64-bit with WSL 2 or Hyper-V enabled.

macOS

  1. Download Docker Desktop for Mac
  2. Open the .dmg file and drag Docker to Applications
  3. Launch Docker from Applications
  4. Grant necessary permissions when prompted
  5. Wait for Docker to start (whale icon in menu bar)

Works on both Intel and Apple Silicon Macs.

Linux (Ubuntu/Debian)

# Update package index
sudo apt-get update

# Install prerequisites
sudo apt-get install ca-certificates curl gnupg

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

# Set up the repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add your user to the docker group (to run without sudo)
sudo usermod -aG docker $USER

# Log out and back in (or run `newgrp docker`) for the group change to take effect

Verify your installation by running:

Terminal
docker --version
# Docker version 24.0.7, build afdd53b

docker run hello-world
# Hello from Docker! This message shows your installation is working correctly.

Essential Docker Commands

Let's explore the most important Docker commands you'll use daily. We'll organize them by category for easy reference.

Image Commands

docker pull
docker pull nginx:latest

Download an image from a registry (Docker Hub by default)

docker images
docker images

List all locally stored images with their sizes and tags

docker build
docker build -t myapp:1.0 .

Build an image from a Dockerfile in the current directory

docker rmi
docker rmi nginx:latest

Remove one or more images from local storage

Container Commands

docker run
docker run -d -p 8080:80 --name webserver nginx

Create and start a container from an image

docker ps
docker ps -a

List running containers (-a shows all, including stopped)

docker stop / start
docker stop webserver && docker start webserver

Stop or start existing containers

docker exec
docker exec -it webserver /bin/bash

Run a command inside a running container (-it for interactive terminal)

docker logs
docker logs -f webserver

View container logs (-f follows/streams new logs)

Docker Run Flags Explained
  • -d — Run in detached mode (background)
  • -p 8080:80 — Map host port 8080 to container port 80
  • --name — Assign a custom name to the container
  • -v — Mount a volume for persistent data
  • -e — Set environment variables
  • --rm — Automatically remove container when it exits
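Combining several of the flags above, a hypothetical run of the nginx image could look like this (the volume name and environment variable here are illustrative, not required by nginx):

```shell
# Detached nginx container, reachable on host port 8080, with a named
# volume mounted for content and an environment variable set.
# --rm cleans the container up automatically when it stops.
docker run -d --rm \
  --name webserver \
  -p 8080:80 \
  -v web-content:/usr/share/nginx/html \
  -e NGINX_HOST=localhost \
  nginx:latest
```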

Writing Dockerfiles

A Dockerfile is a text file containing instructions to build a Docker image. Each instruction creates a new layer in the image, making builds efficient through caching.

[Diagram: Docker Image Layers. The base image (FROM node:18-alpine) sits at the bottom, with read-only layers for WORKDIR /app, COPY package*.json ./, and RUN npm install stacked above it, topped by the container's writable layer.]

Each Dockerfile instruction creates a cached layer. The container adds a writable layer on top.

Let's create a Dockerfile for a Node.js application:

Dockerfile
# Use official Node.js image as base
FROM node:18-alpine

# Set working directory inside container
WORKDIR /app

# Copy package files first (for better caching)
COPY package*.json ./

# Install production dependencies only (--omit=dev is the modern
# replacement for the deprecated --only=production flag)
RUN npm ci --omit=dev

# Copy application source code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define environment variable
ENV NODE_ENV=production

# Command to run the application
CMD ["node", "server.js"]

Dockerfile Best Practices
  • Order matters for caching — Put frequently changing instructions (like COPY source) at the end
  • Use specific tags — node:18-alpine instead of node:latest
  • Combine RUN commands — Reduces layers: RUN apt-get update && apt-get install -y curl
  • Use .dockerignore — Exclude node_modules, .git, etc. from build context
  • Use multi-stage builds — For smaller production images
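To illustrate the .dockerignore point above, a typical Node.js project's .dockerignore might look like this (the entries are examples; tailor them to your project):

```
node_modules
npm-debug.log
.git
.gitignore
.env
Dockerfile
docker-compose*.yml
dist
```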

Multi-Stage Builds

Multi-stage builds let you use multiple FROM statements to create lean production images by discarding build-time dependencies:

Dockerfile (Multi-Stage)
# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:18-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]

Docker Volumes: Persistent Data

By default, data inside containers is ephemeral—it disappears when the container is removed. Volumes provide persistent storage that survives the container lifecycle.

[Diagram: Docker Volumes. Containers A and B each mount /app/data from the same named volume, my-data-volume.]

Multiple containers can share the same volume for data persistence and sharing

Volume Commands
# Create a named volume
docker volume create my-data

# Run container with volume mounted
docker run -d \
  --name postgres-db \
  -v my-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:15

# List volumes
docker volume ls

# Inspect volume details
docker volume inspect my-data

# Remove unused volumes
docker volume prune

Volume Types

Type          Syntax                          Use Case
Named Volume  -v myvolume:/path               Production data, databases
Bind Mount    -v /host/path:/container/path   Development, config files
tmpfs Mount   --tmpfs /path                   Sensitive data, cache
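The three mount types in the table can be sketched as docker run variants (the image name myapp and the paths are illustrative):

```shell
# Named volume: Docker manages where the data lives on the host
docker run -d -v app-data:/var/lib/myapp/data myapp

# Bind mount: map a specific host directory into the container
docker run -d -v "$(pwd)/config:/etc/myapp" myapp

# tmpfs mount: kept in memory only, discarded when the container stops
docker run -d --tmpfs /app/cache myapp
```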

Docker Networking

Docker creates isolated networks for containers. Understanding networking is crucial for multi-container applications.

[Diagram: Docker Network Types. Bridge (the default) connects containers, such as an app and its database, through the docker0 bridge; host attaches a container directly to the host's network stack; none leaves the container fully isolated with no network access.]

Network Commands
# Create a custom bridge network
docker network create my-app-network

# Run containers on the same network
docker run -d --name api --network my-app-network node-api
docker run -d --name db --network my-app-network postgres:15

# Containers can reach each other by name!
# From api container: postgres://db:5432

# List networks
docker network ls

# Inspect network details
docker network inspect my-app-network

# Connect running container to network
docker network connect my-app-network existing-container

Container DNS

Containers on the same custom bridge network can resolve each other by container name. No need to hardcode IP addresses! This is Docker's built-in DNS service.

Docker Compose: Multi-Container Apps

Docker Compose lets you define and manage multi-container applications using a single YAML file. It's perfect for development environments and simple deployments.

  1. Define Services — Create a docker-compose.yml file describing your application's services, networks, and volumes.
  2. Build and Run — Use docker compose up to build images and start all services with a single command.
  3. Scale and Manage — Easily scale services, view logs, and manage the entire stack as a unit.

Here's a complete example for a web application with a database:

docker-compose.yml
# The top-level version key is optional and ignored by modern Docker Compose
version: '3.8'

services:
  # Node.js API server
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgres://user:password@db:5432/myapp
    volumes:
      - ./api:/app
      - /app/node_modules
    depends_on:
      - db
    networks:
      - app-network

  # React frontend
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    depends_on:
      - api
    networks:
      - app-network

  # PostgreSQL database
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-network

  # Redis cache
  redis:
    image: redis:7-alpine
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  postgres-data:

Docker Compose Commands
# Start all services (build if needed)
docker compose up -d

# View logs from all services
docker compose logs -f

# View logs from specific service
docker compose logs -f api

# Stop all services
docker compose stop

# Stop and remove containers, networks
docker compose down

# Rebuild and restart a specific service
docker compose up -d --build api

# Scale a service
docker compose up -d --scale api=3

# Execute command in running service
docker compose exec api npm test

Docker Security Best Practices

Security should be a priority when working with containers. Here are essential practices to keep your Docker environment secure:

Don't Run as Root

By default, containers run as root. Create a non-root user in your Dockerfile:

# Create non-root user
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -D appuser

# Switch to non-root user
USER appuser

Use Official & Minimal Images

Prefer official images and use minimal variants like Alpine:

  • node:18-alpine instead of node:18
  • python:3.11-slim instead of python:3.11
  • Smaller images = smaller attack surface

Never Hardcode Secrets

Don't put secrets in Dockerfiles or images:

  • Use environment variables at runtime
  • Use Docker secrets for Swarm/Kubernetes
  • Use secret management tools (Vault, AWS Secrets Manager)
# Good: Pass at runtime
docker run -e DATABASE_PASSWORD=$DB_PASS myapp

# Better: Use .env file (not committed!)
docker compose --env-file .env.local up
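As a sketch of that .env approach, a hypothetical .env.local could hold plain key-value pairs; keep it out of version control via .gitignore and out of the build context via .dockerignore:

```
# .env.local (never commit this file)
DATABASE_PASSWORD=change-me
DATABASE_URL=postgres://user:change-me@db:5432/myapp
```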

Keep Images Updated

Regularly update base images to get security patches:

# Scan images for vulnerabilities (the Scout subcommand is `cves`)
docker scout cves myimage:latest

# Force pull latest base image
docker build --pull -t myapp .

Real-World Example: Full Stack App

Let's put everything together with a complete full-stack application setup:

Project Structure
my-fullstack-app/
├── docker-compose.yml
├── docker-compose.prod.yml
├── .env.example
├── api/
│   ├── Dockerfile
│   ├── package.json
│   └── src/
├── frontend/
│   ├── Dockerfile
│   ├── package.json
│   └── src/
└── nginx/
    └── nginx.conf

api/Dockerfile
FROM node:18-alpine

# Create non-root user
RUN addgroup -g 1001 nodejs && adduser -u 1001 -G nodejs -D nodejs

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy source code
COPY --chown=nodejs:nodejs . .

# Switch to non-root user
USER nodejs

EXPOSE 3000

CMD ["node", "src/index.js"]

frontend/Dockerfile
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
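The frontend Dockerfile above copies an nginx.conf into the image; a minimal sketch, assuming the compose setup with an api service listening on port 3000 (the /api prefix is an assumption, and the file must sit inside the frontend build context for the COPY instruction to find it), might look like this:

```nginx
server {
    listen 80;

    # Serve the built frontend assets
    root /usr/share/nginx/html;
    index index.html;

    # Fall back to index.html for client-side routing
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Proxy API requests to the api service over the compose network
    location /api/ {
        proxy_pass http://api:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```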

Summary: Docker Command Cheat Sheet

Task                 Command
Build image          docker build -t name:tag .
Run container        docker run -d -p 8080:80 name
List containers      docker ps -a
View logs            docker logs -f container
Enter container      docker exec -it container sh
Stop container       docker stop container
Remove container     docker rm container
Remove image         docker rmi image:tag
Clean up everything  docker system prune -a
Compose up           docker compose up -d
Compose down         docker compose down

Next Steps

Now that you understand Docker basics, here's what to explore next:

  • Docker Swarm — Native container orchestration
  • Kubernetes — Industry-standard orchestration platform
  • CI/CD Integration — Automate builds with GitHub Actions, GitLab CI
  • Container Registries — Push images to Docker Hub, AWS ECR, or GitHub Container Registry

Docker has transformed how we develop, test, and deploy applications. By containerizing your applications, you gain consistency across environments, isolation between services, and portability across any infrastructure. Whether you're building microservices or simply want to eliminate "works on my machine" problems, Docker is an essential tool in every developer's toolkit.

Start small—containerize one application, get comfortable with the workflow, then gradually adopt Docker Compose for multi-service setups. Before you know it, you'll wonder how you ever developed without containers!

Mayur Dabhi

Full-stack developer passionate about clean code, modern web technologies, and sharing knowledge with the developer community.