Introduction to Redis for Caching
Every millisecond counts in web development. When your application repeatedly hits a database for the same data — user profiles, product listings, configuration values — you're burning compute cycles and adding latency that your users can feel. Redis fixes this. It's an open-source, in-memory data structure store that acts as a lightning-fast cache sitting between your application and its slower data sources. In this guide, you'll learn Redis from the ground up: how it works, its data types, proven caching patterns, and how to wire it into real Node.js and Laravel applications.
Redis processes over 100,000 read/write operations per second on commodity hardware. A typical MySQL query takes 1–10ms; the same data from Redis returns in under 0.1ms. For a page that triggers 20 database queries, Redis can shave hundreds of milliseconds off every response.
What Is Redis?
Redis (Remote Dictionary Server) was created by Salvatore Sanfilippo in 2009. Unlike traditional databases that persist data to disk and read from it on every query, Redis stores everything in RAM. This is what makes it so fast — memory access is orders of magnitude quicker than disk I/O.
But Redis is more than a simple key-value store. It supports rich data structures (strings, hashes, lists, sets, sorted sets, and more), optional persistence to disk, pub/sub messaging, Lua scripting, and built-in replication and clustering. This makes it useful far beyond caching — for session storage, rate limiting, leaderboards, message queues, and real-time analytics.
Here's what sets Redis apart from alternatives:
- Data structure awareness: Redis understands lists, sets, and hashes — you don't just store blobs, you manipulate data server-side
- Atomic operations: Commands like INCR, LPUSH, and ZADD are atomic — safe under concurrency without locks
- Expiry built in: Every key can have a TTL (time to live) set with a single command
- Single-threaded core: Redis processes commands sequentially, eliminating race conditions at the engine level
- Persistence options: RDB snapshots and AOF (append-only file) logging let you survive restarts
| Feature | Redis | Memcached | MySQL (cache table) |
|---|---|---|---|
| Data types | Strings, Hashes, Lists, Sets, Sorted Sets, Streams | Strings only | All SQL types |
| Max value size | 512 MB | 1 MB | Unlimited |
| Persistence | RDB + AOF | None | Full |
| Pub/Sub | Yes | No | No |
| Lua scripting | Yes | No | Stored procedures |
| Typical latency | <0.1ms | <0.1ms | 1–10ms |
Installing and Connecting to Redis
Getting Redis running locally takes just a few minutes. Choose whichever method fits your environment best.
Install Redis (Linux/macOS)
Use your package manager or download from redis.io. On Ubuntu: sudo apt install redis-server. On macOS with Homebrew: brew install redis.
Run with Docker (recommended for dev)
Spin up a Redis container without any local installation. Docker ensures your dev environment matches production exactly.
# Pull and run Redis with a persistent volume
docker run -d \
--name redis-cache \
-p 6379:6379 \
-v redis-data:/data \
redis:7-alpine redis-server --appendonly yes
# Verify it's running
docker exec -it redis-cache redis-cli ping
# Output: PONG
Explore with the Redis CLI
Connect to your Redis instance with redis-cli to try commands interactively before writing application code.
redis-cli
# Set a string key with 60-second TTL
SET user:1001 '{"name":"Alice","role":"admin"}' EX 60
# Get it back
GET user:1001
# Check remaining TTL (seconds)
TTL user:1001
# Delete a key
DEL user:1001
# List all keys matching a pattern (avoid in production on large datasets)
KEYS user:*
# Check how many keys exist
DBSIZE
# Flush the current database (careful!)
FLUSHDB
The KEYS pattern command scans the entire keyspace and blocks Redis while it runs. On a database with millions of keys, this can freeze your application for seconds. Use SCAN with a cursor for non-blocking iteration instead.
Redis Data Types for Caching
Choosing the right Redis data type for your use case affects both performance and the complexity of your application code. Here's a practical guide to the most useful types for caching scenarios.
Strings are the workhorse of Redis caching. Store JSON-serialized objects, HTML fragments, API responses, or counters. Every string key can hold up to 512 MB.
# Cache a serialized API response
SET api:products:page1 '{"data":[...],"total":150}' EX 300
# Atomic counter (no race conditions)
INCR page:views:home
INCRBY cart:total:user42 25
# Get and set in one command (update atomically)
GETSET session:token:abc123 "new-token-value"
# Set multiple keys at once
MSET user:1:name "Alice" user:1:email "alice@example.com"
MGET user:1:name user:1:email
Hashes map field names to values within a single key — perfect for caching objects where you often need just one field rather than the whole serialized blob.
# Store a user object as a hash
HSET user:1001 name "Alice" email "alice@example.com" role "admin" plan "pro"
# Read individual field (no deserialization needed)
HGET user:1001 email
# "alice@example.com"
# Read multiple fields
HMGET user:1001 name plan
# 1) "Alice"
# 2) "pro"
# Read the entire object
HGETALL user:1001
# Update a single field without overwriting the rest
HSET user:1001 plan "enterprise"
# Check if a field exists
HEXISTS user:1001 email
# Delete a field
HDEL user:1001 role
Lists are doubly-linked sequences. Use them for recent activity feeds, job queues, or paginated results where you frequently push to one end and pop from the other.
# Push recent product views (newest first)
LPUSH user:1001:viewed product:42 product:17 product:88
# Trim to last 20 items automatically
LTRIM user:1001:viewed 0 19
# Read the 10 most recent
LRANGE user:1001:viewed 0 9
# Pop from a queue (blocks until item is available)
BLPOP job:queue:email 5
# Waits up to 5 seconds for a new email job
# Peek at the first element without removing
LINDEX user:1001:viewed 0
Sorted Sets associate a floating-point score with each member, keeping elements ordered. Perfect for leaderboards, rate limiting windows, and expiring session indexes.
# Add items with scores (e.g., timestamp for rate limiting)
ZADD api:hits:user42 1715300000 "req:001"
ZADD api:hits:user42 1715300005 "req:002"
# Count requests in the last 60 seconds (the application computes the
# min/max Unix timestamps; literal values shown here)
ZCOUNT api:hits:user42 1715299945 1715300005
# Leaderboard: add scores
ZADD leaderboard 9850 "alice" 7200 "bob" 11000 "carol"
# Top 3 players (highest score first)
ZREVRANGE leaderboard 0 2 WITHSCORES
# 1) "carol"
# 2) "11000"
# 3) "alice"
# 4) "9850"
# 5) "bob"
# 6) "7200"
# User's rank (0-based)
ZREVRANK leaderboard "alice"
# 1 (second place)
Core Caching Patterns
Knowing Redis commands is only half the battle. The other half is deciding when to read from cache and when to write to it. Three patterns cover the vast majority of real-world use cases.
Cache-Aside (Lazy Loading)
The application is responsible for loading data into the cache. On a read, it checks Redis first. On a miss, it fetches from the database and populates the cache. This is the most widely used pattern because it only caches data that is actually requested — you don't waste memory caching things nobody reads.
On a cache hit, data returns from Redis. On a miss, the app fetches from DB and populates the cache.
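The hit/miss logic fits in a few lines. Here's a minimal sketch in plain JavaScript, using a `Map` as a stand-in for Redis and a `fetchFromDb` callback as a stand-in for the database query (both names are ours, for illustration only):

```javascript
// Cache-aside sketch: a Map stands in for Redis, fetchFromDb for the database.
function makeCacheAside(cache, fetchFromDb) {
  return function get(key) {
    if (cache.has(key)) return cache.get(key); // hit: skip the database entirely
    const value = fetchFromDb(key);            // miss: load from the source of truth
    cache.set(key, value);                     // populate so the next read is a hit
    return value;
  };
}

// Usage: the second read of the same key never touches the "database".
let dbHits = 0;
const getUser = makeCacheAside(new Map(), (id) => { dbHits++; return { id }; });
getUser('user:1001');
getUser('user:1001');
// dbHits is now 1
```

The real Node.js version later in this guide follows exactly this shape, with `redis.get`/`redis.set` plus a TTL in place of the `Map`.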
Write-Through
Every write to the database also synchronously updates the cache. The cache is never stale because it's written at the same time as the source of truth. The downside: you cache data that may never be read, and every write pays a double cost.
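A minimal sketch of the pattern, again with two `Map`s standing in for the real stores (the `makeWriteThrough` helper is our illustration, not a library API):

```javascript
// Write-through sketch: every write goes to the database and the cache in the
// same operation, so the cache can never lag behind the source of truth.
function makeWriteThrough(db, cache) {
  return {
    write(key, value) {
      db.set(key, value);    // 1. persist to the source of truth
      cache.set(key, value); // 2. update the cache synchronously
    },
    read(key) {
      return cache.get(key); // reads are always served from the cache
    },
  };
}

const db = new Map();
const cache = new Map();
const store = makeWriteThrough(db, cache);
store.write('product:42', { price: 19.99 });
// cache and db now hold identical data for product:42
```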
Write-Behind (Write-Back)
Writes go to Redis immediately and are flushed to the database asynchronously in batches. This yields the highest write throughput but risks data loss if Redis crashes before the flush completes. Use it for metrics, analytics, and counters — not for financial transactions.
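The buffering-then-batching behaviour can be sketched like this (a `Map` stands in for each store; `flush()` is invoked manually here, whereas a real implementation would run it on a timer or a queue worker):

```javascript
// Write-behind sketch: writes land in the cache immediately; a flush step
// pushes the dirty keys to the database in one batch.
function makeWriteBehind(db, cache) {
  const dirty = new Set();
  return {
    write(key, value) {
      cache.set(key, value); // fast path: memory only
      dirty.add(key);        // remember what still needs persisting
    },
    flush() {
      for (const key of dirty) db.set(key, cache.get(key));
      const flushed = dirty.size;
      dirty.clear();
      return flushed;
    },
  };
}

const db = new Map();
const counters = makeWriteBehind(db, new Map());
counters.write('views:home', 101);
counters.write('views:about', 7);
// db is still empty here; counters.flush() persists both keys at once
```

Note the failure mode in miniature: anything written but not yet flushed is lost if the process dies, which is why this pattern suits metrics rather than money.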
Pattern Comparison: When to Use Each
| Pattern | Read Performance | Write Performance | Data Freshness | Best For |
|---|---|---|---|---|
| Cache-Aside | Fast after warm-up | Normal | Eventual (TTL) | Read-heavy workloads |
| Write-Through | Always fast | Slower (double write) | Always fresh | Data that is read after write |
| Write-Behind | Always fast | Fastest | Eventual | High-frequency counters, analytics |
Redis with Node.js
The ioredis library is the recommended Redis client for Node.js. It supports promises, pipelining, clustering, and Lua scripting out of the box.
npm install ioredis
import Redis from 'ioredis';
const redis = new Redis({
host: process.env.REDIS_HOST || '127.0.0.1',
port: Number(process.env.REDIS_PORT) || 6379,
password: process.env.REDIS_PASSWORD || undefined,
db: 0,
maxRetriesPerRequest: 3,
retryStrategy: (times) => Math.min(times * 50, 2000),
});
redis.on('connect', () => console.log('Redis connected'));
redis.on('error', (err) => console.error('Redis error:', err));
export default redis;
Implementing Cache-Aside in Node.js
The following example wraps an expensive database call with a Redis cache layer. The first request takes the full DB hit; subsequent requests within the TTL window return instantly from memory.
import redis from '../lib/redis.js';
import db from '../lib/database.js';
const CACHE_TTL = 300; // 5 minutes
export async function getProducts(page = 1, limit = 20) {
const cacheKey = `products:page:${page}:limit:${limit}`;
// 1. Check cache
const cached = await redis.get(cacheKey);
if (cached) {
return JSON.parse(cached);
}
// 2. Cache miss — query the database
const offset = (page - 1) * limit;
const products = await db.query(
'SELECT * FROM products WHERE active = 1 ORDER BY created_at DESC LIMIT ? OFFSET ?',
[limit, offset]
);
// 3. Populate cache
await redis.set(cacheKey, JSON.stringify(products), 'EX', CACHE_TTL);
return products;
}
// Invalidate the cache when a product is updated
export async function updateProduct(id, data) {
await db.query('UPDATE products SET ? WHERE id = ?', [data, id]);
// Delete all product page caches (pattern-based invalidation)
const keys = await redis.keys('products:page:*');
if (keys.length > 0) {
await redis.del(...keys);
}
}
import express from 'express';
import { getProducts, updateProduct } from '../services/productService.js';
const router = express.Router();
router.get('/', async (req, res) => {
const { page = 1, limit = 20 } = req.query;
const products = await getProducts(Number(page), Number(limit));
res.json(products);
});
router.put('/:id', async (req, res) => {
await updateProduct(req.params.id, req.body);
res.json({ success: true });
});
export default router;
In the example above, redis.keys('products:page:*') is fine for development but blocks on large keyspaces. In production, replace it with redis.scan in a loop to iterate without blocking, or use tagged caching with a set that tracks related keys.
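The SCAN-based replacement is a cursor loop. Here's a sketch assuming an ioredis-style client, where `client.scan(cursor, 'MATCH', pattern, 'COUNT', n)` resolves to `[nextCursor, keys]` (the helper name `deleteByPattern` is ours):

```javascript
// Non-blocking pattern deletion: iterate the keyspace in small batches with
// SCAN instead of KEYS, deleting each batch as it arrives.
async function deleteByPattern(client, pattern) {
  let cursor = '0';
  let deleted = 0;
  do {
    const [nextCursor, keys] = await client.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    if (keys.length > 0) deleted += await client.del(...keys);
    cursor = nextCursor;
  } while (cursor !== '0'); // Redis signals completion by returning cursor "0"
  return deleted;
}
```

Called as `await deleteByPattern(redis, 'products:page:*')`, this visits the keyspace incrementally, so other commands keep executing between batches instead of stalling behind one giant KEYS call.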
Redis with Laravel
Laravel has first-class Redis support. The Cache facade provides a clean API that works identically whether your cache driver is Redis, Memcached, or file-based — swapping the driver requires only a config change.
Install the Predis Client
Laravel supports both Predis (pure PHP) and PhpRedis (C extension). Predis requires no extension installation, which makes it the easiest way to get started; switch to PhpRedis later if you need the extra performance of the extension.
composer require predis/predis
Configure the .env File
Set the cache driver to Redis and provide your connection details. Laravel reads these automatically through its config system.
CACHE_DRIVER=redis
SESSION_DRIVER=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
Using the Cache Facade
<?php
namespace App\Http\Controllers;
use App\Models\Product;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Cache;
class ProductController extends Controller
{
public function index()
{
// remember(): get from cache or run closure and cache result
$products = Cache::remember('products.all', now()->addMinutes(10), function () {
return Product::with('category')
->where('active', true)
->orderBy('created_at', 'desc')
->get();
});
return response()->json($products);
}
public function show(Product $product)
{
$key = "product.{$product->id}";
$data = Cache::remember($key, now()->addHour(), function () use ($product) {
return $product->load(['category', 'reviews', 'tags']);
});
return response()->json($data);
}
public function update(Request $request, Product $product)
{
$product->update($request->all()); // use a FormRequest and validated() in real code
// Invalidate related cache keys
Cache::forget("product.{$product->id}");
Cache::forget('products.all');
return response()->json($product);
}
}
Tagged Caching
Tags let you group related cache entries so you can flush them all at once with a single call — no pattern matching required, and no risk of accidentally deleting unrelated keys.
// Cache with tags
Cache::tags(['products', 'category:electronics'])->remember(
'products.electronics.page1',
now()->addMinutes(30),
fn() => Product::byCategory('electronics')->paginate(20)
);
Cache::tags(['products', 'category:clothing'])->remember(
'products.clothing.page1',
now()->addMinutes(30),
fn() => Product::byCategory('clothing')->paginate(20)
);
// Flush all product caches (both electronics and clothing)
Cache::tags(['products'])->flush();
// Flush only electronics caches
Cache::tags(['category:electronics'])->flush();
Expiration and Eviction Strategies
Setting the right TTL (time to live) is as important as choosing the right data type. Too short and you thrash the database; too long and users see stale data. Redis also needs a memory eviction policy for when it runs out of space.
Setting Expiry
# Set key with TTL in seconds (atomic)
SET user:session:abc123 "{...}" EX 3600
# Set TTL on an existing key
EXPIRE user:profile:1001 1800
# Set expiry as a Unix timestamp
EXPIREAT config:flags 1735689600
# Check remaining TTL
TTL user:session:abc123
# -1 = no expiry, -2 = key doesn't exist
# Remove expiry (make persistent)
PERSIST user:config:global
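One refinement worth pairing with these commands: add a small random jitter to each TTL so that keys written together don't all expire in the same instant. A sketch (the helper name and the 10% default are our choices; `rand` is injectable purely for testability):

```javascript
// TTL jitter: spread expiry times so keys cached in the same burst don't all
// expire at once. baseTtl is in seconds; jitterFraction is the max extra fraction.
function ttlWithJitter(baseTtl, jitterFraction = 0.1, rand = Math.random) {
  const extra = Math.floor(baseTtl * jitterFraction * rand());
  return baseTtl + extra;
}

// e.g. redis.set(key, value, 'EX', ttlWithJitter(300))
// yields a TTL anywhere from 300 to 329 seconds
```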
Eviction Policies
When Redis reaches its maxmemory limit, it needs to decide which keys to evict to make room. Configure this in redis.conf or via the CLI with CONFIG SET maxmemory-policy.
| Policy | Behaviour | Best For |
|---|---|---|
| noeviction | Return error on new writes when full | Critical data, never cache-only |
| allkeys-lru | Evict least recently used from all keys | General-purpose cache (recommended) |
| volatile-lru | Evict LRU only from keys with expiry set | Mix of cached and persistent data |
| allkeys-lfu | Evict least frequently used from all keys | Uneven access patterns (skewed reads) |
| volatile-ttl | Evict keys closest to expiry first | When you want shortest-lived data evicted |
| allkeys-random | Evict random keys | Uniform access pattern (rarely best choice) |
# Maximum memory Redis can use
maxmemory 512mb
# Evict least recently used keys from all keys (best for a pure cache)
maxmemory-policy allkeys-lru
# LRU sample size (higher = more accurate, slightly more CPU)
maxmemory-samples 10
# Disable persistence if Redis is cache-only (reduces I/O)
save ""
appendonly no
Avoiding Cache Stampedes
A cache stampede happens when a popular key expires and hundreds of concurrent requests all miss the cache simultaneously, each triggering a separate database query. This can overwhelm your database. Two solutions:
- Mutex locking: Only one process fetches from DB when the cache misses; others wait for the lock to be released and then read the freshly populated cache
- Probabilistic early expiration: A key that is about to expire has a small probability of being refreshed early, spreading the refresh load over time rather than concentrating it at expiry
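The second option can be sketched with one well-known formulation (sometimes called "XFetch"): refresh early when `delta * beta * -ln(rand())` exceeds the time left before expiry. The function below is our illustration of that rule, with `rand` injectable for testing:

```javascript
// Probabilistic early refresh sketch: decide whether to recompute a value
// before its TTL is up. remainingMs = time left before expiry, deltaMs = how
// long a recompute takes, beta tunes eagerness (1.0 is a common default).
function shouldRefreshEarly(remainingMs, deltaMs, beta = 1.0, rand = Math.random) {
  // The closer the key is to expiring, the more likely this returns true,
  // so refreshes smear out over the window instead of piling up at expiry.
  return -deltaMs * beta * Math.log(rand()) >= remainingMs;
}

// A key with 50ms left and a 100ms recompute cost is usually refreshed early;
// one with 10 minutes left almost never is.
```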
async function getWithLock(key, fetchFn, ttl = 300) {
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
const lockKey = `lock:${key}`;
// NX = only set if not exists, PX = expire in milliseconds
const acquired = await redis.set(lockKey, '1', 'NX', 'PX', 5000);
if (acquired) {
try {
const data = await fetchFn();
await redis.set(key, JSON.stringify(data), 'EX', ttl);
return data;
} finally {
await redis.del(lockKey);
}
} else {
// Another process holds the lock — wait briefly and retry
await new Promise((r) => setTimeout(r, 100));
return getWithLock(key, fetchFn, ttl);
}
}
Key Takeaways
Redis is one of the highest-leverage tools in a web developer's toolkit. A few hours of setup can reduce your database load by 80–90% for read-heavy workloads, dramatically improve response times, and free your database to handle writes and complex queries it's actually good at.
What We Covered
- Redis vs alternatives: In-memory speed, rich data types, and optional persistence make Redis superior to Memcached for most use cases
- Data types: Use Strings for serialized objects, Hashes for partial field access, Lists for feeds and queues, Sorted Sets for leaderboards and rate limiting
- Caching patterns: Cache-Aside for read-heavy apps, Write-Through for strong consistency, Write-Behind for maximum write throughput
- Node.js with ioredis: Wrap DB calls in a cache check; invalidate keys on mutation
- Laravel Cache facade: Cache::remember(), Cache::forget(), and tagged caching for grouped invalidation
- Eviction: Set maxmemory and choose allkeys-lru for a dedicated cache instance
- Stampede prevention: Mutex locks or probabilistic early expiration keep your database safe under high concurrency
"The fastest query is the one you never make."
— Engineering wisdom
Start with Cache-Aside on your most expensive queries, measure the hit rate with INFO stats (watch keyspace_hits vs keyspace_misses), and tune your TTLs from there. Redis rewards an iterative approach — you don't need to cache everything on day one.
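Turning those two counters into a hit rate is a one-liner; a tiny helper (name ours) for the dashboards you'll inevitably build:

```javascript
// Hit rate from the counters reported by INFO stats
// (keyspace_hits and keyspace_misses).
function hitRate(keyspaceHits, keyspaceMisses) {
  const total = keyspaceHits + keyspaceMisses;
  return total === 0 ? 0 : keyspaceHits / total;
}

// hitRate(90000, 10000) === 0.9 → 90% of reads never touched the database
```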