
Caching Strategies for Web Applications

Mayur Dabhi
March 24, 2026
24 min read

Caching is one of the most powerful techniques for improving web application performance. A well-designed caching strategy can reduce database load by 90%, cut response times from seconds to milliseconds, and handle traffic spikes without breaking a sweat. Whether you're building a small API or a high-traffic e-commerce platform, understanding caching is essential for every web developer.

In this comprehensive guide, we'll explore every layer of caching—from browser caching and CDNs to in-memory stores like Redis and Memcached. You'll learn when to use each approach, how to implement them in popular frameworks, and most importantly, how to handle the trickiest part: cache invalidation.

What You'll Learn
  • The caching pyramid: browser, CDN, application, and database caching
  • Redis vs Memcached: choosing the right in-memory store
  • HTTP caching headers: Cache-Control, ETag, and Last-Modified
  • Cache invalidation strategies that actually work
  • Implementing caching in Laravel, Node.js, and React
  • Common caching pitfalls and how to avoid them
  • Measuring and optimizing cache hit rates

The Caching Pyramid

Modern web applications use multiple layers of caching, each serving a different purpose. Understanding when data is cached at each layer—and how long it stays there—is crucial for building fast, reliable applications.

The caching pyramid, from fastest to slowest (each level falls through to the next on a cache miss):

  • Browser cache: ~0ms | localStorage, sessionStorage, HTTP cache
  • CDN / edge cache: ~10-50ms | Cloudflare, AWS CloudFront, Fastly
  • Application cache: ~1-5ms | in-memory key-value stores (Redis, Memcached)
  • Database query cache: ~5-20ms | MySQL query cache (removed in MySQL 8.0), PostgreSQL shared buffers
  • Database disk I/O: ~50-500ms

Each layer absorbs requests before they reach the slower, more expensive layers beneath it. (The latencies above are measured from different vantage points: browser and CDN figures include the network hop to the user, while application and database figures are server-side.) The goal is to serve as many requests as possible from the upper layers.

Browser Cache

Stores assets locally in the user's browser. Zero network latency, but limited to individual users.

CDN Cache

Distributes cached content across edge servers worldwide. Great for static assets and API responses.

Application Cache

In-memory stores like Redis handle dynamic data, sessions, and computed results.

Database Cache

Query caches and buffer pools reduce disk reads for frequently accessed data.
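
The fall-through behavior across these layers can be sketched as a tiered read-through cache. Everything below is illustrative: plain Maps stand in for the real layers, and the loader stands in for the database.

```javascript
// Tiered read-through sketch. Plain Maps stand in for the real layers
// (browser cache, CDN, Redis); `loader` stands in for the database.
class TieredCache {
  constructor(layers, loader) {
    this.layers = layers; // fastest first
    this.loader = loader; // final fallback on a full miss
  }

  async get(key) {
    for (let i = 0; i < this.layers.length; i++) {
      if (this.layers[i].has(key)) {
        const value = this.layers[i].get(key);
        // Promote the hit into every faster layer for next time
        for (let j = 0; j < i; j++) this.layers[j].set(key, value);
        return value;
      }
    }
    // Full miss: fall through to the "database", then fill every layer
    const value = await this.loader(key);
    for (const layer of this.layers) layer.set(key, value);
    return value;
  }
}

// Usage: two Map "layers" over a fake database
const db = new Map([['user:1', { id: 1, name: 'Ada' }]]);
const cache = new TieredCache([new Map(), new Map()], async (k) => db.get(k));
```

Real systems rarely promote across layers in application code like this; the browser and CDN layers do it via HTTP headers, which a later section covers.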

Redis vs Memcached

When it comes to application-level caching, Redis and Memcached are the two dominant choices. Both are blazingly fast in-memory stores, but they serve different use cases.

Redis vs Memcached at a glance:

  • Data structures: Redis supports strings, lists, sets, hashes, sorted sets, and streams; Memcached supports strings only.
  • Persistence: Redis offers RDB snapshots and AOF logs; Memcached has none (pure cache).
  • Pub/Sub: built into Redis; not supported by Memcached.
  • Clustering: Redis Cluster with auto-sharding; Memcached relies on client-side sharding.
  • Memory efficiency: Redis is good (slightly higher overhead); Memcached is excellent (minimal overhead).
  • Best for: Redis for sessions, queues, leaderboards, and real-time features; Memcached for simple key-value caching at scale.

Quick Decision Guide

Choose Redis if: You need data persistence, pub/sub, complex data structures, or Lua scripting.

Choose Memcached if: You need pure caching with maximum memory efficiency and predictable performance.
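
One concrete payoff of Redis's richer data structures: a leaderboard is a single sorted set. The sketch below mimics the relevant sorted-set semantics in plain JavaScript so it runs stand-alone; in real code the two methods would simply be the ioredis calls `zadd` and `zrevrange`.

```javascript
// Why data structures matter: a leaderboard is one ZADD/ZREVRANGE pair in
// Redis, but needs manual bookkeeping in a strings-only store like Memcached.
// In-process sketch of the sorted-set semantics (illustrative only).
class Leaderboard {
  constructor() { this.scores = new Map(); }
  zadd(member, score) { this.scores.set(member, score); }
  // Top-N members, highest score first (what ZREVRANGE 0 n-1 returns)
  zrevrange(n) {
    return [...this.scores.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, n)
      .map(([member]) => member);
  }
}

const board = new Leaderboard();
board.zadd('alice', 320);
board.zadd('bob', 150);
board.zadd('carol', 410);
console.log(board.zrevrange(2)); // → ['carol', 'alice']
```

With Memcached you would have to serialize the whole ranking into one string value and rewrite it on every score change; Redis updates a single member in O(log N).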

Redis Implementation Examples

cache.js
import Redis from 'ioredis';

const redis = new Redis({
  host: 'localhost',
  port: 6379,
  maxRetriesPerRequest: 3
});

// Cache-aside pattern implementation
async function getUser(userId) {
  const cacheKey = `user:${userId}`;
  
  // 1. Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    console.log('Cache HIT');
    return JSON.parse(cached);
  }
  
  // 2. Cache miss - fetch from database
  console.log('Cache MISS');
  const user = await db.users.findById(userId);
  
  // 3. Store in cache with 1 hour TTL (skip missing users here -
  //    caching a null for an hour would hide a row added later)
  if (user) {
    await redis.setex(cacheKey, 3600, JSON.stringify(user));
  }
  
  return user;
}

// Cache invalidation on update
async function updateUser(userId, data) {
  await db.users.update(userId, data);
  await redis.del(`user:${userId}`);  // Invalidate cache
}
UserController.php
<?php

namespace App\Http\Controllers;

use App\Models\User;
use Illuminate\Support\Facades\Cache;

class UserController extends Controller
{
    public function show($id)
    {
        // Remember pattern - cache for 1 hour
        $user = Cache::remember("user:{$id}", 3600, function () use ($id) {
            return User::with('profile', 'posts')->findOrFail($id);
        });
        
        return response()->json($user);
    }
    
    public function update(Request $request, $id)
    {
        $user = User::findOrFail($id);
        $user->update($request->validated());
        
        // Invalidate cache
        Cache::forget("user:{$id}");
        
        // Also invalidate any list caches (tags require a taggable
        // store such as the redis or memcached cache driver)
        Cache::tags(['users'])->flush();
        
        return response()->json($user);
    }
    
    // Cache with tags for easier invalidation
    public function index()
    {
        return Cache::tags(['users'])->remember('users:all', 600, function () {
            return User::paginate(20);
        });
    }
}
cache.py
import redis
import json
import hashlib
from functools import wraps

r = redis.Redis(host='localhost', port=6379, db=0)

def cached(ttl=3600, prefix='cache'):
    """Decorator for caching function results"""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Build a stable cache key: built-in hash() is randomized per
            # process, so use a digest of the repr'd arguments instead
            digest = hashlib.md5(f"{args!r}{kwargs!r}".encode()).hexdigest()
            key = f"{prefix}:{func.__name__}:{digest}"
            
            # Try to get from cache
            cached_result = r.get(key)
            if cached_result:
                return json.loads(cached_result)
            
            # Execute function and cache result
            result = func(*args, **kwargs)
            r.setex(key, ttl, json.dumps(result))
            return result
        return wrapper
    return decorator

@cached(ttl=1800, prefix='api')
def get_user_stats(user_id: int) -> dict:
    """Expensive computation cached for 30 minutes"""
    # Simulate expensive database queries
    return {
        'user_id': user_id,
        'total_orders': calculate_orders(user_id),
        'lifetime_value': calculate_ltv(user_id),
        'engagement_score': calculate_engagement(user_id)
    }

HTTP Caching Headers

Browser caching is controlled through HTTP headers. Understanding these headers is crucial for optimizing the delivery of static assets and API responses.

A typical HTTP caching flow:

  1. The browser sends GET /api/data; the server responds with Cache-Control: max-age=3600 and ETag: "abc123".
  2. Within max-age, the browser serves the response straight from its cache, with no request at all.
  3. After expiry, the browser revalidates with If-None-Match: "abc123"; if the content is unchanged, the server returns 304 Not Modified with no body, and the cached copy is reused.

Key headers: Cache-Control (max-age, no-cache, no-store, private, public), ETag (content hash for revalidation), and Last-Modified (timestamp-based revalidation).

ETags and conditional requests enable efficient cache revalidation without transferring the full response

Cache-Control Directives

HTTP Headers
# Cache for 1 hour in browser and shared caches (CDN)
Cache-Control: public, max-age=3600

# Cache for 1 day, but revalidate after 1 hour
Cache-Control: public, max-age=86400, stale-while-revalidate=3600

# Private cache only (not CDN), requires revalidation
Cache-Control: private, no-cache

# Never cache (sensitive data)
Cache-Control: no-store

# Immutable - never revalidate (for versioned assets)
Cache-Control: public, max-age=31536000, immutable
Common Mistake

no-cache doesn't mean "don't cache"! It means "cache, but always revalidate before using." Use no-store to completely prevent caching.
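
To make the distinction concrete, here is an illustrative policy helper that maps content categories to the directives above. The category names are made up for the example; the directive strings are the real ones.

```javascript
// Choosing Cache-Control by content category: "no-cache" still stores the
// response but forces revalidation before reuse; "no-store" forbids storing
// entirely. The categories here are illustrative.
function cachePolicy(kind) {
  switch (kind) {
    case 'versioned-asset': // e.g. /app.a1b2c3.js - safe to cache forever
      return 'public, max-age=31536000, immutable';
    case 'html':            // cache, but revalidate on every use
      return 'no-cache';
    case 'banking':         // sensitive - never write to any cache
      return 'no-store';
    default:                // short private cache as a conservative default
      return 'private, max-age=60';
  }
}

console.log(cachePolicy('banking')); // → 'no-store'
```

A helper like this keeps header decisions in one place instead of scattered across route handlers, which also makes them easy to audit.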

Setting Headers in Your Application

server.js
import express from 'express';
import crypto from 'crypto';

const app = express();

// Static assets - long cache with versioning
app.use('/static', express.static('public', {
  maxAge: '1y',
  immutable: true,
  etag: true
}));

// API endpoint with ETag support
app.get('/api/products/:id', async (req, res) => {
  const product = await getProduct(req.params.id);
  
  // Generate ETag from content
  const etag = crypto
    .createHash('md5')
    .update(JSON.stringify(product))
    .digest('hex');
  
  // Check if client has current version
  if (req.headers['if-none-match'] === etag) {
    return res.status(304).end();
  }
  
  res.set({
    'Cache-Control': 'private, max-age=60',
    'ETag': etag
  });
  
  res.json(product);
});
nginx.conf
server {
    # Static assets with long cache
    location /static/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        add_header Vary "Accept-Encoding";
    }
    
    # Images with medium cache
    location ~* \.(jpg|jpeg|png|gif|webp|svg|ico)$ {
        expires 30d;
        add_header Cache-Control "public";
    }
    
    # API responses - short cache with revalidation
    location /api/ {
        add_header Cache-Control "private, max-age=60, stale-while-revalidate=300";
        add_header Vary "Authorization, Accept";
        
        proxy_pass http://backend;
        proxy_cache api_cache;
        proxy_cache_valid 200 1m;
        proxy_cache_use_stale error timeout updating;
    }
    
    # HTML - no cache (always fresh)
    location ~* \.html$ {
        add_header Cache-Control "no-cache";
    }
}
app/api/products/[id]/route.ts
import { NextResponse } from 'next/server';

export async function GET(
  request: Request,
  { params }: { params: { id: string } }
) {
  const product = await getProduct(params.id);
  
  // Generate ETag
  const etag = generateETag(product);
  
  // Check conditional request
  const ifNoneMatch = request.headers.get('if-none-match');
  if (ifNoneMatch === etag) {
    return new Response(null, { status: 304 });
  }
  
  return NextResponse.json(product, {
    headers: {
      'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=300',
      'ETag': etag,
      'Vary': 'Accept-Encoding'
    }
  });
}

// For static generation with ISR
export const revalidate = 60; // Revalidate every 60 seconds

Cache Invalidation Strategies

As Phil Karlton famously said: "There are only two hard things in Computer Science: cache invalidation and naming things." Let's tackle the first one with proven strategies.

Common invalidation strategies:

  • Time-To-Live (TTL): cache expires after a set time. ✅ Simple | ⚠️ Stale-data window
  • Event-based: invalidate on data change. ✅ Always fresh | ⚠️ Complex
  • Cache versioning: a new version means a new cache key. ✅ No stale reads | ⚠️ Memory overhead
  • Write-through: update the cache on every write. ✅ Consistent | ⚠️ Write latency
  • Cache tags: group and invalidate related keys. ✅ Flexible | ⚠️ Needs Redis sets
  • Stale-while-revalidate: serve stale, update in background. ✅ Fast | ⚠️ Brief stale window

Choose your invalidation strategy based on data freshness requirements and system complexity
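
As a concrete example of the last strategy, here is a minimal application-level stale-while-revalidate sketch. It uses an in-process Map purely so it runs stand-alone; a real deployment would keep entries in Redis and add stampede protection for the background refresh.

```javascript
// Stale-while-revalidate sketch: serve whatever is cached immediately, and
// if it is past its TTL, refresh it in the background for the next caller.
// In-process Map for illustration; real code would store entries in Redis.
function createSwrCache(fetchFn, { ttlMs = 60_000 } = {}) {
  const store = new Map(); // key -> { value, fetchedAt }

  return async function get(key) {
    const entry = store.get(key);
    if (entry) {
      if (Date.now() - entry.fetchedAt > ttlMs) {
        // Stale: kick off a background refresh, but answer with the stale
        // value right now so the caller never waits on the slow path
        fetchFn(key)
          .then(value => store.set(key, { value, fetchedAt: Date.now() }))
          .catch(() => {}); // keep serving stale if the refresh fails
      }
      return entry.value;
    }
    // Cold miss: the first caller has to wait for the fetch
    const value = await fetchFn(key);
    store.set(key, { value, fetchedAt: Date.now() });
    return value;
  };
}
```

Note that concurrent stale reads can each trigger a refresh; combining this with the single-flight pattern shown later closes that gap.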

Event-Based Invalidation with Redis Pub/Sub

cache-invalidation.js
import Redis from 'ioredis';

const publisher = new Redis();
const subscriber = new Redis();

// Publisher: When data changes
async function updateProduct(productId, data) {
  // 1. Update database
  await db.products.update(productId, data);
  
  // 2. Publish invalidation event
  await publisher.publish('cache:invalidate', JSON.stringify({
    type: 'product',
    id: productId,
    keys: [
      `product:${productId}`,
      `product:${productId}:details`,
      'products:list',
      `category:${data.categoryId}:products`
    ]
  }));
}

// Subscriber: Listen for invalidation events
subscriber.subscribe('cache:invalidate');

subscriber.on('message', async (channel, message) => {
  const { type, id, keys } = JSON.parse(message);
  
  console.log(`Invalidating cache for ${type}:${id}`);
  
  // Delete all related cache keys
  if (keys.length > 0) {
    await redis.del(...keys);
  }
  
  // Optionally warm the cache immediately
  if (type === 'product') {
    await warmProductCache(id);
  }
});

// Cache warming function
async function warmProductCache(productId) {
  const product = await db.products.findById(productId);
  await redis.setex(`product:${productId}`, 3600, JSON.stringify(product));
}

Cache Tags Pattern (Laravel-style)

tagged-cache.js
class TaggedCache {
  constructor(redis) {
    this.redis = redis;
  }
  
  // Store value with tags
  async set(key, value, ttl, tags = []) {
    const pipeline = this.redis.pipeline();
    
    // Store the actual value
    pipeline.setex(key, ttl, JSON.stringify(value));
    
    // Associate key with each tag
    for (const tag of tags) {
      pipeline.sadd(`tag:${tag}`, key);
      pipeline.expire(`tag:${tag}`, ttl + 60); // Tags expire slightly later
    }
    
    await pipeline.exec();
  }
  
  // Invalidate all keys with a specific tag
  async invalidateTag(tag) {
    const keys = await this.redis.smembers(`tag:${tag}`);
    
    if (keys.length > 0) {
      await this.redis.del(...keys);
      await this.redis.del(`tag:${tag}`);
    }
    
    return keys.length;
  }
}

// Usage
const cache = new TaggedCache(redis);

// Store product with multiple tags
await cache.set(
  `product:${product.id}`,
  product,
  3600,
  ['products', `category:${product.categoryId}`, `brand:${product.brandId}`]
);

// When a category is updated, invalidate all its products
await cache.invalidateTag(`category:${categoryId}`);

// When inventory changes, invalidate all products
await cache.invalidateTag('products');

CDN Caching Strategies

Content Delivery Networks cache your content at edge locations worldwide, dramatically reducing latency for users. Here's how to leverage them effectively.

CDN Cache Best Practices

  • Static Assets: Use immutable caching with content hashing (/app.a1b2c3.js)
  • API Responses: Use s-maxage for CDN-specific TTL separate from browser cache
  • Personalized Content: Use Vary header or bypass CDN cache entirely
  • Cache Purging: Implement instant purge APIs for emergency updates
  • Stale-While-Revalidate: Serve stale content while fetching fresh in background
Cloudflare Workers - Edge Caching
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    
    // Check edge cache first
    const cacheKey = new Request(url.toString(), request);
    const cache = caches.default;
    
    let response = await cache.match(cacheKey);
    
    if (!response) {
      // Cache miss - fetch from origin
      response = await fetch(request);
      
      // Clone response for caching
      response = new Response(response.body, response);
      
      // Set caching headers based on content type
      if (url.pathname.startsWith('/api/')) {
        response.headers.set('Cache-Control', 's-maxage=60, stale-while-revalidate=300');
      } else if (url.pathname.match(/\.(js|css|woff2)$/)) {
        response.headers.set('Cache-Control', 'public, max-age=31536000, immutable');
      }
      
      // Store in edge cache (non-blocking)
      ctx.waitUntil(cache.put(cacheKey, response.clone()));
    }
    
    return response;
  }
};

Measuring Cache Performance

You can't improve what you don't measure. Here are the key metrics to track for your caching strategy.

Cache Hit Rate

Percentage of requests served from cache. Aim for >90% for static content, >70% for dynamic.

Response Time

Compare cached vs uncached response times. Cache should be 10-100x faster.

Time-To-First-Byte

Measures server response latency. CDN cache should reduce TTFB by 50%+.

Memory Usage

Monitor Redis/Memcached memory. Set eviction policies before hitting limits.

Redis Cache Monitoring
# Check cache statistics
redis-cli INFO stats

# Key metrics to monitor:
# - keyspace_hits: Number of successful key lookups
# - keyspace_misses: Number of failed key lookups
# - evicted_keys: Keys removed due to maxmemory policy

# Calculate hit rate
redis-cli INFO stats | grep keyspace
# Hit Rate = hits / (hits + misses) * 100

# Monitor memory usage
redis-cli INFO memory
# used_memory_human: Current memory usage
# maxmemory_human: Memory limit

# Check slowlog for slow operations
redis-cli SLOWLOG GET 10
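
The hit-rate arithmetic above is easy to automate. A small parsing sketch that turns raw INFO text into a percentage (the sample text below is illustrative):

```javascript
// Compute cache hit rate from `redis-cli INFO stats` output.
// INFO returns "key:value" lines; we only need keyspace_hits and
// keyspace_misses. The sample input below is illustrative.
function hitRate(infoText) {
  const stats = {};
  for (const line of infoText.split('\n')) {
    const [k, v] = line.split(':');
    if (k && v !== undefined) stats[k.trim()] = Number(v);
  }
  const hits = stats.keyspace_hits ?? 0;
  const misses = stats.keyspace_misses ?? 0;
  const total = hits + misses;
  return total === 0 ? 0 : (hits / total) * 100;
}

const sample = 'keyspace_hits:9000\nkeyspace_misses:1000\n';
console.log(hitRate(sample).toFixed(1)); // → '90.0'
```

In production you would feed this the result of an `INFO stats` call from your Redis client on a timer, and export the number to your metrics system as shown in the next section.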

Prometheus Metrics for Cache Monitoring

metrics.js
import { Counter, Histogram, Gauge } from 'prom-client';

// Cache hit/miss counter
const cacheHits = new Counter({
  name: 'cache_hits_total',
  help: 'Total number of cache hits',
  labelNames: ['cache_type', 'key_prefix']
});

const cacheMisses = new Counter({
  name: 'cache_misses_total',
  help: 'Total number of cache misses',
  labelNames: ['cache_type', 'key_prefix']
});

// Cache operation latency
const cacheLatency = new Histogram({
  name: 'cache_operation_duration_seconds',
  help: 'Cache operation latency in seconds',
  labelNames: ['operation', 'cache_type'],
  buckets: [0.001, 0.005, 0.01, 0.05, 0.1, 0.5]
});

// Memory usage gauge
const cacheMemory = new Gauge({
  name: 'cache_memory_bytes',
  help: 'Current cache memory usage in bytes'
});

// Instrumented cache wrapper
class InstrumentedCache {
  constructor(redis) {
    this.redis = redis;
  }
  
  async get(key) {
    const timer = cacheLatency.startTimer({ operation: 'get', cache_type: 'redis' });
    const prefix = key.split(':')[0];
    
    try {
      const value = await this.redis.get(key);
      
      if (value) {
        cacheHits.inc({ cache_type: 'redis', key_prefix: prefix });
      } else {
        cacheMisses.inc({ cache_type: 'redis', key_prefix: prefix });
      }
      
      return value ? JSON.parse(value) : null;
    } finally {
      timer();
    }
  }
}

Common Caching Pitfalls

Even experienced developers fall into these traps. Here's how to avoid them.

1

Cache Stampede (Thundering Herd)

When a popular cache key expires, hundreds of requests simultaneously hit the database. Solution: Use locking (single-flight) or stale-while-revalidate patterns.

2

Caching Nulls Without TTL

Caching "not found" results permanently can hide data that's added later. Solution: Use short TTLs for negative caching (30-60 seconds).
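
A minimal sketch of that fix, using an in-process store so it runs stand-alone; real code would call `redis.setex` with the two different TTLs.

```javascript
// Negative caching sketch: cache "not found" under a sentinel with a much
// shorter TTL than real hits, so rows added later surface quickly.
// In-process Map with expiry for illustration only.
const NOT_FOUND = Symbol('not-found');
const negStore = new Map(); // key -> { value, expiresAt }

async function getWithNegativeCache(key, fetchFn,
    { hitTtlMs = 3_600_000, missTtlMs = 30_000 } = {}) {
  const entry = negStore.get(key);
  if (entry && entry.expiresAt > Date.now()) {
    return entry.value === NOT_FOUND ? null : entry.value;
  }
  const value = await fetchFn(key); // returns null when the row is missing
  negStore.set(key, {
    value: value ?? NOT_FOUND,
    // 1 hour for real hits, 30 seconds for misses
    expiresAt: Date.now() + (value ? hitTtlMs : missTtlMs),
  });
  return value;
}
```

The sentinel matters: storing a literal `null` would be indistinguishable from "not cached", so every lookup of a missing row would still hit the database.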

3

Inconsistent Cache Keys

Generating different cache keys for the same data leads to duplicate caching and stale data. Solution: Centralize cache key generation and normalize inputs.
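
One way to centralize key generation is a single builder that sorts and normalizes inputs, sketched below; the `v1:` prefix and the normalization rules are assumptions to adapt to your own scheme.

```javascript
// Centralized cache-key builder: the same logical query always maps to the
// same key, regardless of argument order, case, or stray whitespace.
// The version prefix and normalization rules here are illustrative.
function cacheKey(resource, params = {}) {
  const normalized = Object.keys(params)
    .sort() // {a, b} and {b, a} must produce identical keys
    .map(k => `${k}=${String(params[k]).toLowerCase().trim()}`)
    .join('&');
  return normalized ? `v1:${resource}:${normalized}` : `v1:${resource}`;
}

console.log(cacheKey('products', { category: 'Shoes', page: 2 }));
// → 'v1:products:category=shoes&page=2'
```

Routing every read and every invalidation through this one function eliminates the duplicate-key class of bugs; the version prefix also gives you a cheap global invalidation lever (bump `v1` to `v2` on deploy).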

4

Over-Caching User-Specific Data

Accidentally serving one user's cached data to another. Solution: Include user ID in cache keys or use Cache-Control: private.

Critical: Cache Poisoning

Never cache responses based on unvalidated user input. An attacker could poison your cache with malicious content that gets served to all users. Always sanitize cache keys and validate that cached responses are safe to serve.
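
A sketch of that sanitization via a whitelist; the permitted character set and length cap shown are assumptions to tailor to your own key scheme.

```javascript
// Sanitizing user input before it becomes part of a cache key: allow only a
// safe character set and cap the length, rejecting everything else outright.
// The whitelist below is illustrative - adjust it to your key scheme.
function safeCacheKey(prefix, userInput) {
  if (typeof userInput !== 'string' ||
      userInput.length === 0 ||
      userInput.length > 128) {
    throw new Error('invalid cache key input');
  }
  // Letters, digits, dash, underscore only: no key separators (":"),
  // wildcards ("*"), newlines, or path traversal sequences get through
  if (!/^[A-Za-z0-9_-]+$/.test(userInput)) {
    throw new Error('invalid cache key input');
  }
  return `${prefix}:${userInput}`;
}

console.log(safeCacheKey('product', 'abc-123')); // → 'product:abc-123'
```

Rejecting rather than escaping keeps the key space predictable; an attacker can never smuggle a `:` or `*` into a position where it collides with another tenant's keys.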

Preventing Cache Stampede

single-flight.js
// Single-flight pattern: Only one request fetches, others wait
const inFlight = new Map();

async function getWithSingleFlight(key, fetchFn, ttl = 3600) {
  // Check cache first
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);
  
  // Check if another request is already fetching
  if (inFlight.has(key)) {
    return inFlight.get(key);
  }
  
  // Create promise for this fetch
  const fetchPromise = (async () => {
    try {
      const data = await fetchFn();
      await redis.setex(key, ttl, JSON.stringify(data));
      return data;
    } finally {
      inFlight.delete(key);
    }
  })();
  
  inFlight.set(key, fetchPromise);
  return fetchPromise;
}

// With Redis locking for distributed systems
async function getWithLock(key, fetchFn, ttl = 3600) {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);
  
  const lockKey = `lock:${key}`;
  const lockTTL = 10; // 10 second lock
  
  // Try to acquire lock
  const acquired = await redis.set(lockKey, '1', 'EX', lockTTL, 'NX');
  
  if (acquired) {
    try {
      const data = await fetchFn();
      await redis.setex(key, ttl, JSON.stringify(data));
      return data;
    } finally {
      await redis.del(lockKey);
    }
  } else {
    // Wait and retry
    await new Promise(r => setTimeout(r, 100));
    return getWithLock(key, fetchFn, ttl);
  }
}

Caching Checklist

Use this checklist when implementing caching in your application:

Pre-Launch Caching Checklist

  • ☐ Static assets have content-based hashes and immutable headers
  • ☐ API responses have appropriate Cache-Control headers
  • ☐ Redis/Memcached has memory limits and eviction policy configured
  • ☐ Cache invalidation is triggered on all data mutation paths
  • ☐ Sensitive data uses Cache-Control: private or no-store
  • ☐ Cache keys include version/environment prefix
  • ☐ Monitoring for cache hit rate, memory usage, and latency
  • ☐ Cache stampede protection in place for hot keys
  • ☐ CDN purge mechanism tested and documented
  • ☐ Fallback behavior when cache is unavailable

Conclusion

Effective caching is a game-changer for web application performance. By implementing the right strategies at each layer—browser, CDN, application, and database—you can achieve sub-millisecond response times and handle massive traffic spikes gracefully.

Remember the key principle: start with the simple strategies (TTL-based expiration, HTTP caching headers), then add complexity only when needed. The best caching strategy is the one you can maintain and debug at 3 AM during a production incident.

Mayur Dabhi

Full Stack Developer specializing in Laravel, React, and building scalable web applications. Passionate about performance optimization and clean code.