DevOps

Nginx Configuration for Web Apps

Mayur Dabhi
April 23, 2026
14 min read

Nginx (pronounced "engine-x") powers over 34% of all websites on the internet, including some of the highest-traffic sites in the world — Netflix, Airbnb, GitHub, and Dropbox all rely on it. Originally created by Igor Sysoev in 2004 to solve the C10K problem (handling 10,000 concurrent connections), Nginx has evolved into a Swiss army knife for modern web infrastructure: HTTP server, reverse proxy, load balancer, mail proxy, and generic TCP/UDP proxy in a single, lightweight package.

What separates Nginx from older servers like Apache is its event-driven, asynchronous architecture. Instead of spawning a new thread or process per connection, Nginx handles thousands of connections within a single worker process using non-blocking I/O. This translates directly into lower memory consumption and dramatically higher throughput under load.
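The practical ceiling on concurrency follows from two directives: total simultaneous clients is roughly worker_processes multiplied by worker_connections. A minimal sketch (the numbers are illustrative defaults, not recommendations):

```nginx
# Rough capacity model: max clients ≈ worker_processes × worker_connections
worker_processes auto;        # one worker per CPU core

events {
    worker_connections 1024;  # per-worker connection limit
}

# e.g. 4 cores × 1024 ≈ 4096 concurrent client connections,
# minus any connections Nginx holds open to upstream backends
```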

When to Choose Nginx

Nginx excels at serving static files, acting as a reverse proxy in front of application servers (Node.js, PHP-FPM, Gunicorn), and terminating SSL. If your stack involves a Node.js or Laravel backend, Nginx in front of it is the standard production pattern — not optional.

Installation and Core Concepts

Nginx is available in all major Linux package managers. On Ubuntu/Debian, the official Nginx repository gives you the latest stable or mainline release rather than the OS-packaged version (which can be years behind).

Step 1: Install on Ubuntu/Debian

Add the official Nginx repository for the latest stable version, then install via apt.

Terminal — Ubuntu/Debian
# Install from official Nginx repo (latest stable)
curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo gpg --dearmor \
    -o /usr/share/keyrings/nginx-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] \
http://nginx.org/packages/ubuntu $(lsb_release -cs) nginx" \
    | sudo tee /etc/apt/sources.list.d/nginx.list

sudo apt update && sudo apt install nginx -y

# Verify and start
nginx -v
sudo systemctl enable --now nginx
sudo systemctl status nginx

Step 2: Install on CentOS/RHEL/Amazon Linux

Use the official Nginx RPM repository for rpm-based distros.

Terminal — CentOS/RHEL
# Create repo file
sudo tee /etc/yum.repos.d/nginx.repo <<'EOF'
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF

sudo yum install nginx -y
sudo systemctl enable --now nginx

Key File Paths

File/Directory                   Purpose
/etc/nginx/nginx.conf            Main configuration file: global settings and includes
/etc/nginx/conf.d/               Drop-in server block configs (*.conf files auto-included)
/etc/nginx/sites-available/      Available virtual hosts (Debian/Ubuntu convention)
/etc/nginx/sites-enabled/        Symlinks to active sites from sites-available
/var/log/nginx/access.log        All incoming request logs
/var/log/nginx/error.log         Error and diagnostic logs
/var/www/html/                   Default document root for static files

Understanding nginx.conf Structure

Nginx's configuration is organized in a hierarchy of contexts. Directives in an outer context are inherited by inner contexts unless overridden.

main context                   # worker_processes, error_log, pid
├── events { }                 # worker_connections
└── http { }                   # gzip, log_format, include
    ├── server { }             # listen, server_name
    │   ├── location / { }
    │   └── location /api { }
    └── server { }             # port 443, SSL
        ├── location / { }
        └── location ~ \.php$ { }

Nginx configuration context hierarchy

Server Blocks (Virtual Hosts)

A server block is Nginx's equivalent of Apache's VirtualHost — it defines how Nginx handles requests for a particular domain or IP. You can run dozens of websites on a single server, each with its own server block.
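Before adding per-domain blocks, it is common to define a catch-all default server, so requests whose Host header matches no configured site (IP scans, stale DNS entries) never fall through to one of your real sites. A minimal sketch:

```nginx
# Catch-all for requests whose Host matches no other server_name
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;      # placeholder that never matches a real name
    return 444;         # Nginx-specific code: close the connection, send nothing
}
```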

Static Site Server Block

/etc/nginx/conf.d/example.com.conf
server {
    listen 80;
    listen [::]:80;               # IPv6

    server_name example.com www.example.com;

    root /var/www/example.com/public;
    index index.html index.htm;

    # Serve static files, 404 for missing assets
    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets aggressively
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Deny access to hidden files (.git, .env)
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

    # Custom error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;

    # Logging
    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log warn;
}
Testing Your Config

Always validate your configuration before reloading. sudo nginx -t parses the full config and reports any syntax errors without touching the running process. Then apply changes with sudo nginx -s reload — zero downtime, no connection drops.

PHP-FPM Server Block (Laravel / WordPress)

/etc/nginx/conf.d/laravel-app.conf
server {
    listen 80;
    server_name myapp.com www.myapp.com;

    root /var/www/myapp/public;
    index index.php index.html;

    # Laravel: route all requests through index.php
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Pass PHP files to PHP-FPM
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;  # match your installed PHP-FPM version
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;

        # Increase timeouts for long-running requests
        fastcgi_read_timeout 300;
        fastcgi_send_timeout 300;
    }

    # Block access to sensitive Laravel files
    location ~ /\.(env|git|htaccess) {
        deny all;
    }

    # Serve storage files (public disk)
    location /storage {
        alias /var/www/myapp/storage/app/public;
        try_files $uri $uri/ =404;
    }

    client_max_body_size 64M;    # Allow large file uploads
}

SSL/TLS Configuration

Serving traffic over HTTPS is no longer optional. Modern browsers mark HTTP sites as "Not Secure", search engines penalize them in rankings, and many browser APIs (service workers, geolocation, camera) are restricted to secure contexts. Let's Encrypt makes this free and automated.

Step 1: Install Certbot

Certbot is the official Let's Encrypt client. The snap package is recommended for the latest version.

Terminal — Install Certbot
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot

# Obtain and install certificate (Nginx plugin handles everything)
sudo certbot --nginx -d example.com -d www.example.com

# Test auto-renewal
sudo certbot renew --dry-run

Step 2: Harden Your SSL Configuration

Certbot installs a basic SSL config, but you should harden it with modern cipher suites and security headers.

/etc/nginx/conf.d/example.com.conf — Production SSL
# Redirect all HTTP → HTTPS
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # Certificates (set by certbot)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern TLS settings (A+ on SSL Labs)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # Session resumption (performance)
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m;
    ssl_session_tickets off;

    # OCSP stapling (faster cert validation for clients)
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    resolver 1.1.1.1 8.8.8.8 valid=300s;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

    root /var/www/example.com/public;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
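A note on syntax: since Nginx 1.25.1, the http2 parameter on the listen directive is deprecated in favor of a standalone directive. On newer versions the equivalent form is:

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;            # Nginx >= 1.25.1; replaces "listen 443 ssl http2"
    server_name example.com www.example.com;

    # ... rest of the SSL config unchanged ...
}
```

The old form still works but logs a deprecation warning on recent releases.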
HSTS Warning

The Strict-Transport-Security header with preload commits your domain to HTTPS-only in browsers' hardcoded lists. This is permanent and can take months to undo. Only add preload when you're certain you'll maintain HTTPS forever on this domain and all subdomains.
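A safer path, if you are not yet certain: roll HSTS out in stages, starting with a short max-age and no preload, and only raise it once you have confirmed HTTPS works across the domain:

```nginx
# Stage 1: five minutes, trivially easy to back out
add_header Strict-Transport-Security "max-age=300" always;

# Stage 2 (after verification): one week
# add_header Strict-Transport-Security "max-age=604800; includeSubDomains" always;

# Stage 3: full two years, optionally with preload
# add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;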

Nginx as a Reverse Proxy

A reverse proxy sits between external clients and your backend application servers. Nginx accepts all incoming connections and forwards them to the appropriate backend — a Node.js process, a Python Gunicorn server, a PHP-FPM pool, or even another internal service. This is the most common production pattern for web applications.

Internet clients
        │  :443 (SSL termination)
        ▼
Nginx (reverse proxy)
        ├──▶ Static files     (served directly from disk)
        ├──▶ Node.js app      (localhost:3000)
        ├──▶ PHP-FPM          (unix socket)
        └──▶ Python API       (localhost:8000)

Nginx as a reverse proxy in front of multiple backends

Proxy all requests to a Node.js application on port 3000:

upstream nodejs_backend {
    server 127.0.0.1:3000;
    keepalive 32;        # Reuse connections to backend
}

server {
    listen 443 ssl http2;
    server_name api.example.com;

    # ... SSL config here ...

    location / {
        proxy_pass         http://nodejs_backend;
        proxy_http_version 1.1;

        # Required for WebSocket support
        proxy_set_header Upgrade    $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host       $host;
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_read_timeout    90s;
        proxy_connect_timeout 90s;
        proxy_send_timeout    90s;
    }
}
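Hardcoding Connection "upgrade" works, but it disables upstream keepalive for ordinary HTTP requests. The pattern from the official Nginx WebSocket proxying documentation derives the header from the request instead (the map block goes in the http context):

```nginx
# http context: send "upgrade" only when the client actually asked for it
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# then, inside the location block, replace the hardcoded header:
#     proxy_set_header Upgrade    $http_upgrade;
#     proxy_set_header Connection $connection_upgrade;
```

If you want plain requests to keep the upstream keepalive connection, use an empty string ('' '') instead of close for the default-less case.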

Proxy to a Python app served by Gunicorn with a Unix socket:

upstream gunicorn_backend {
    server unix:/run/gunicorn/gunicorn.sock fail_timeout=0;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    # ... SSL config here ...

    location / {
        proxy_pass http://gunicorn_backend;
        proxy_http_version 1.1;

        proxy_set_header Host               $http_host;
        proxy_set_header X-Real-IP          $remote_addr;
        proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto  $scheme;

        # Buffer settings for slower backends
        proxy_buffering       on;
        proxy_buffer_size     128k;
        proxy_buffers         4 256k;
        proxy_busy_buffers_size 256k;
    }

    # Serve Django/Flask static files directly (bypass app server)
    location /static/ {
        alias /var/www/myapp/staticfiles/;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}

Critical proxy headers and what they do:

# Your application needs the real client IP, not Nginx's IP
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# Tell backend whether the original request was HTTP or HTTPS
proxy_set_header X-Forwarded-Proto $scheme;

# Pass the original Host header (important for virtual hosting)
proxy_set_header Host $host;

# If backend needs the original port
proxy_set_header X-Forwarded-Port $server_port;

# In Laravel, register Nginx as a trusted proxy so these headers are
# honored: App\Http\Middleware\TrustProxies in Laravel 10 and earlier,
# or ->trustProxies() in bootstrap/app.php from Laravel 11 onward.
# Without trusted proxies, request()->ip() returns Nginx's IP.

Load Balancing

When a single application server instance isn't enough — either for redundancy or raw capacity — Nginx's upstream module distributes traffic across multiple backend instances. This is the foundation of horizontal scaling.

Load Balancing Algorithms

Method              Directive                 Best For
Round Robin         (default, no directive)   Stateless apps where all servers are identical
Least Connections   least_conn;               Requests with varying processing times (API, DB-heavy)
IP Hash             ip_hash;                  Session-based apps that need sticky sessions
Weighted            server ... weight=N;      Mixed server capacities (route more traffic to larger nodes)
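For example, sticky sessions via IP hash take a single extra directive (the backend IPs below are placeholders):

```nginx
upstream sticky_cluster {
    ip_hash;                  # same client IP always reaches the same backend
    server 10.0.1.10:3000;
    server 10.0.1.11:3000;
}
```

Note that ip_hash keys on the client address, so all users behind one corporate NAT land on the same backend; for finer-grained affinity, consider hash with a cookie or header variable.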
Load Balancing Configuration
upstream app_cluster {
    # Least connections: routes to server with fewest active connections
    least_conn;

    # Three backend nodes
    server 10.0.1.10:3000 weight=3;   # Primary — receives 3x traffic
    server 10.0.1.11:3000 weight=2;   # Secondary
    server 10.0.1.12:3000 weight=1;   # Tertiary (smaller instance)

    # Backup server — only used if all primaries are down
    server 10.0.1.13:3000 backup;

    # Health check params
    # max_fails: mark server down after N failures
    # fail_timeout: how long to mark it as down, then retry
    server 10.0.1.14:3000 max_fails=3 fail_timeout=30s;

    # Keep alive connections to backends
    keepalive 64;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    location / {
        proxy_pass http://app_cluster;
        proxy_http_version 1.1;
        proxy_set_header Connection "";    # Required for keepalive upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Performance Optimization

Out of the box, Nginx is already fast. But with the right tuning in nginx.conf, you can squeeze dramatically more throughput — especially for high-traffic applications.

/etc/nginx/nginx.conf — Production Tuning
# Match worker count to CPU cores
worker_processes auto;

# Maximum open files per worker (must be >= worker_connections)
worker_rlimit_nofile 65535;

events {
    # Max simultaneous connections per worker
    worker_connections 4096;

    # Accept as many pending connections as possible per event notification
    multi_accept on;

    # Use Linux's most efficient I/O method
    use epoll;
}

http {
    # ── Basics ───────────────────────────────────────────
    sendfile           on;    # Kernel-level file transfer (bypasses userspace)
    tcp_nopush         on;    # Send headers in one packet (works with sendfile)
    tcp_nodelay        on;    # Disable Nagle for low-latency
    keepalive_timeout  65;
    keepalive_requests 1000;
    types_hash_max_size 2048;
    server_tokens      off;   # Hide Nginx version from response headers

    # ── MIME Types ───────────────────────────────────────
    include      /etc/nginx/mime.types;
    default_type application/octet-stream;

    # ── Buffers ──────────────────────────────────────────
    client_body_buffer_size    128k;
    client_max_body_size       10m;
    client_header_buffer_size  1k;
    large_client_header_buffers 4 16k;
    output_buffers             1 32k;
    postpone_output            1460;

    # ── Gzip Compression ─────────────────────────────────
    gzip              on;
    gzip_vary         on;
    gzip_proxied      any;
    gzip_comp_level   6;      # 1 (fastest) to 9 (best compression) — 6 is sweet spot
    gzip_types
        text/plain
        text/css
        text/javascript
        application/javascript
        application/json
        application/xml
        application/rss+xml
        image/svg+xml;
        # Note: woff2 is deliberately left out; the format is already
        # compressed, so gzipping it costs CPU for no size reduction.

    # ── Open File Cache ───────────────────────────────────
    # Cache file descriptors — avoids repeated stat() syscalls
    open_file_cache          max=200000 inactive=20s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;

    # ── Logging ───────────────────────────────────────────
    access_log /var/log/nginx/access.log combined buffer=512k flush=1m;
    error_log  /var/log/nginx/error.log warn;

    include /etc/nginx/conf.d/*.conf;
}

Rate Limiting — Protect Your Endpoints

Rate limiting in Nginx uses a leaky-bucket algorithm. Define a zone in the http block, then apply it to specific locations:

http {
    # Define a rate limit zone: 10 req/sec per IP, 10MB state storage
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;
}

server {
    # Apply general rate limit with burst tolerance
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://backend;
    }

    # Stricter limit for login endpoint (brute force protection)
    location /api/auth/login {
        limit_req zone=login_limit burst=5;
        limit_req_status 429;
        proxy_pass http://backend;
    }
}

The burst parameter allows a temporary spike above the rate. nodelay processes burst requests immediately rather than queuing them — important for APIs where clients retry quickly.
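Rate limiting caps requests per second; the companion limit_conn module caps concurrent connections per client, which matters for download or streaming endpoints where one connection can be long-lived. A sketch (zone name, path, and limits are illustrative):

```nginx
http {
    # 10 MB of shared state, keyed by client IP
    limit_conn_zone $binary_remote_addr zone=addr_conn:10m;
}

server {
    location /downloads/ {
        limit_conn addr_conn 10;     # max 10 simultaneous connections per IP
        limit_conn_status 429;       # default would be 503
    }
}
```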

Security Hardening

A well-configured Nginx installation is your first line of defense. Beyond SSL, several configuration patterns significantly reduce your attack surface.

Security Configuration Snippets
server {
    # ── Restrict HTTP Methods ─────────────────────────────
    if ($request_method !~ ^(GET|HEAD|POST|PUT|PATCH|DELETE|OPTIONS)$ ) {
        return 405;
    }

    # ── Block Common Exploits ─────────────────────────────
    # Block SQL injection attempts in query string
    if ($query_string ~* "union.*select|insert.*into|drop.*table") {
        return 403;
    }

    # Block requests with invalid characters in URI
    if ($request_uri ~* "[;|`&\$]") {
        return 400;
    }

    # ── Geo Blocking (requires ngx_http_geoip2_module) ────
    # geoip2 and map blocks live in the http context, not here.
    # Define a country variable from a GeoIP2 database (the path and
    # variable names below are examples), then map it to a flag:
    # geoip2 /etc/nginx/GeoLite2-Country.mmdb {
    #     $geoip2_data_country_code country iso_code;
    # }
    # map $geoip2_data_country_code $blocked_country {
    #     default 0;
    #     CN 1;   # Block China
    #     RU 1;   # Block Russia
    # }
    # Then in this server block:
    # if ($blocked_country) { return 444; }  # 444 = close connection, no response

    # ── Deny Bot User Agents ──────────────────────────────
    if ($http_user_agent ~* (scrapy|python-requests|libwww-perl|nikto|sqlmap)) {
        return 403;
    }

    # ── Limit Request Size ────────────────────────────────
    client_max_body_size 10m;
    client_body_timeout  10s;
    client_header_timeout 10s;
    send_timeout         10s;

    # ── Buffer Overflow Protection ────────────────────────
    # Deliberately tiny buffers reject oversized requests early.
    # Caution: 1k header buffers will return 400 to clients with
    # large cookies or JWTs; raise these if your app uses either.
    client_body_buffer_size    1k;
    client_header_buffer_size  1k;
    large_client_header_buffers 2 1k;
}
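A caveat on the snippets above: inside location blocks, nginx's if directive has well-documented pitfalls and is reliably safe mainly with return and rewrite. Moving match logic into a map in the http context keeps server blocks simple; the sketch below mirrors the bot list above using that pattern:

```nginx
# http context: classify the user agent once, in one place
map $http_user_agent $is_bad_bot {
    default                                              0;
    ~*(scrapy|python-requests|libwww-perl|nikto|sqlmap)  1;
}

server {
    # "if + return" at server level is one of the documented-safe uses of if
    if ($is_bad_bot) {
        return 403;
    }
}
```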
Nginx -t Is Your Best Friend

After any configuration change, run sudo nginx -t && sudo nginx -s reload as a single command. The && ensures Nginx only reloads if the test passes — preventing you from accidentally taking down a production server with a typo.

Conclusion and Key Takeaways

Nginx is a remarkably versatile tool that most developers interact with daily without realizing it. Once you understand the core mental model — contexts, server blocks, location matching — the configuration language becomes intuitive. What seems like dense DSL on first read is actually a handful of reusable patterns.

Key Takeaways

  • Architecture matters: Nginx's event-driven model makes it radically more memory-efficient than process-per-connection servers under load
  • Always test before reload: nginx -t before every nginx -s reload, no exceptions
  • SSL is non-negotiable: Use Let's Encrypt + Certbot; add HSTS and OCSP stapling for production
  • Reverse proxy pattern: Nginx handles SSL, static files, and rate limiting; your app server handles only business logic
  • Gzip and caching: Enable gzip compression and aggressive cache headers for static assets — huge wins for free
  • Rate limiting: Protect login endpoints and APIs from brute-force and DDoS at the web server level before requests hit your app
  • Hide server version: server_tokens off; removes information that helps attackers target known vulnerabilities
"Nginx is not just a web server — it's a building block. Understanding it is the difference between a developer who deploys and a developer who architects."

The configurations in this guide represent production-tested patterns used across thousands of deployments. Start with the server block for your stack (Laravel/PHP-FPM or Node.js), layer on SSL with Certbot, then add the performance tuning from the nginx.conf section. Each piece is independent and incremental — you don't need to do everything at once.

Nginx Server Configuration DevOps SSL Reverse Proxy Load Balancing
Mayur Dabhi

Full Stack Developer with 5+ years of experience building scalable web applications with Laravel, React, and Node.js.