Redis & Advanced Caching

Redis Performance Optimization

20 min · Lesson 23 of 30

Redis is designed for high performance, but proper configuration and best practices are essential to maximize throughput, minimize latency, and optimize memory usage in production environments.

Memory Optimization Strategies

Memory is Redis's most critical resource. Efficient memory usage directly impacts performance and cost:

Memory Optimization Techniques:
  • Use appropriate data structures: Hashes for objects, Sets for unique items, Sorted Sets with scores
  • Enable compression: Use Redis's built-in encoding for small collections
  • Set expiration times: Prevent memory leaks with TTL
  • Use memory-efficient encodings: ziplist, intset, listpack
  • Avoid large keys: Split large values into smaller chunks
  • Monitor memory fragmentation: If mem_fragmentation_ratio exceeds ~1.5, enable active defragmentation (activedefrag yes) or restart Redis
Check Memory Usage:
# Total memory usage
INFO memory

# Memory usage by key
MEMORY USAGE mykey

# Memory stats
used_memory_human: 2.50M
used_memory_peak_human: 3.12M
mem_fragmentation_ratio: 1.23

# Find largest keys
redis-cli --bigkeys

# Detailed memory analysis
redis-cli --memkeys --memkeys-samples 1000
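The mem_fragmentation_ratio reported by INFO memory is simply resident memory (RSS, as seen by the OS) divided by the bytes Redis believes it has allocated. A minimal sketch, with illustrative byte counts chosen to match the sample output above:

```python
def fragmentation_ratio(used_memory_rss: int, used_memory: int) -> float:
    """mem_fragmentation_ratio = RSS as seen by the OS / bytes Redis allocated."""
    return used_memory_rss / used_memory

# Illustrative numbers: ~2.50M logical usage, ~3.08M resident
ratio = fragmentation_ratio(3_225_419, 2_621_440)
print(round(ratio, 2))  # 1.23
```

A ratio well above 1.0 means the allocator is holding memory Redis is no longer using; a ratio below 1.0 usually means the OS has swapped part of Redis out, which is far worse for latency.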

Key Naming Conventions

Consistent key naming improves organization, performance, and memory efficiency:

Recommended Pattern:
# Format: object:id:field or namespace:object:id
user:1000:name
user:1000:email
user:1000:settings

session:abc123:data
cache:homepage:html
rate_limit:api:user:1000

# Use colons as separators (Redis convention)
# Namespaces help organize and pattern match
# Shorter keys = less memory usage
Memory Tip: Key names consume memory. A key named "u:1000:n" uses less memory than "user:1000:name". Balance readability with memory efficiency.
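To keep the colon convention consistent across a codebase, it helps to build keys through one helper instead of formatting strings by hand. A small sketch (the helper name make_key is our own, not a Redis API):

```python
def make_key(*parts) -> str:
    """Join namespace parts with colons, the conventional Redis separator."""
    return ":".join(str(p) for p in parts)

print(make_key("user", 1000, "name"))               # user:1000:name
print(make_key("rate_limit", "api", "user", 1000))  # rate_limit:api:user:1000
```

Centralizing key construction also makes it trivial to switch to shorter prefixes later (e.g. "u" instead of "user") without hunting down format strings.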

SCAN vs KEYS - Critical Performance Difference

Never use KEYS in production - it blocks Redis while scanning all keys:

❌ Bad - Blocks Redis:
KEYS user:*
# Blocks entire server until complete
# O(N) complexity where N = total keys

✅ Good - Non-blocking:
SCAN 0 MATCH user:* COUNT 100
# Returns cursor + batch of keys
# Non-blocking, can be paginated
# O(1) per call

# SCAN iteration example (shell)
cursor=0
while true; do
  result=$(redis-cli SCAN "$cursor" MATCH 'user:*' COUNT 100)
  cursor=$(echo "$result" | head -1)
  echo "$result" | tail -n +2
  [ "$cursor" = "0" ] && break
done
Critical Warning: KEYS command can block Redis for seconds on databases with millions of keys. Always use SCAN, SSCAN, HSCAN, or ZSCAN instead.
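SCAN's contract is easy to model in plain Python: each call returns a new cursor plus a batch of keys, and a returned cursor of 0 means iteration is complete. The sketch below mimics that contract over an in-memory set purely to show the client-side loop; real SCAN cursors are opaque values, not plain offsets:

```python
from fnmatch import fnmatchcase

def scan(keyspace, cursor=0, match="*", count=100):
    """Teaching sketch of SCAN's contract: return (next_cursor, batch).
    A next_cursor of 0 signals that the iteration is finished."""
    keys = sorted(keyspace)
    batch = [k for k in keys[cursor:cursor + count] if fnmatchcase(k, match)]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(keys) else next_cursor), batch

keyspace = {f"user:{i}" for i in range(250)} | {"session:abc"}
cursor, found = 0, []
while True:
    cursor, batch = scan(keyspace, cursor, match="user:*", count=100)
    found.extend(batch)
    if cursor == 0:
        break
print(len(found))  # 250
```

Note that MATCH filtering happens after a batch is selected, so a call may legitimately return an empty batch with a non-zero cursor; the loop must terminate on the cursor, never on an empty result.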

Pipeline Optimization

Pipelining reduces network round trips by batching multiple commands:

Without Pipeline (Slow):
# 1000 commands = 1000 round trips
for i in range(1000):
    redis.set(f"key:{i}", f"value{i}")
# ~500ms with 0.5ms latency per command

With Pipeline (Fast):
# 1000 commands = 1 round trip
pipe = redis.pipeline()
for i in range(1000):
    pipe.set(f"key:{i}", f"value{i}")
pipe.execute()
# ~10ms total
PHP Pipeline Example:
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$pipe = $redis->multi(Redis::PIPELINE);
for ($i = 0; $i < 1000; $i++) {
    $pipe->set("key:$i", "value$i");
}
$pipe->exec();

Node.js Pipeline Example:
const pipeline = redis.pipeline();
for (let i = 0; i < 1000; i++) {
  pipeline.set(`key:${i}`, `value${i}`);
}
await pipeline.exec();
Pipeline Performance: Pipelining can improve throughput by 5-10x for bulk operations. Use it for batch inserts, bulk reads, or any scenario with multiple sequential commands.
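For very large batches, an unbounded pipeline buffers every command and every reply in memory at once, so it is common to cap pipeline size and execute in chunks. A minimal chunking helper (the batch size of 10,000 is an illustrative choice, and `r` stands in for a redis-py client):

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

# Usage sketch (assumes a redis-py client named `r`):
# for batch in chunked(range(1_000_000), 10_000):
#     pipe = r.pipeline()
#     for i in batch:
#         pipe.set(f"key:{i}", f"value{i}")
#     pipe.execute()

print([len(b) for b in chunked(range(25), 10)])  # [10, 10, 5]
```

This keeps client and server buffers bounded while still amortizing the network round trip over thousands of commands per execute().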

Connection Pooling

Reuse connections instead of creating new ones for each request:

PHP Connection Pool (Laravel):
// config/database.php
'redis' => [
    'client' => 'predis',
    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],
    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_DB', 0),
        'persistent' => true, // Enable persistent connections
    ],
],
Node.js Connection Pool:
const Redis = require('ioredis');

const redis = new Redis({
  host: '127.0.0.1',
  port: 6379,
  maxRetriesPerRequest: 3,
  enableReadyCheck: true,
  lazyConnect: false,
  keepAlive: 30000 // Keep connections alive
});

// Reuse this connection throughout your app
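The pooling pattern behind both examples is generic: hand out an idle connection when one exists, create a new one only while under the limit, otherwise wait. A language-agnostic sketch with a stub connection factory (ConnectionPool here is our own illustration, not a specific client library's class):

```python
import queue

class ConnectionPool:
    """Minimal pooling pattern: reuse idle connections, create up to max_size.
    `connect` is a caller-supplied factory (a real client opens a TCP socket)."""
    def __init__(self, connect, max_size=10):
        self._connect = connect
        self._idle = queue.Queue(maxsize=max_size)
        self._created = 0
        self._max = max_size

    def acquire(self):
        try:
            return self._idle.get_nowait()        # reuse an idle connection
        except queue.Empty:
            if self._created >= self._max:
                return self._idle.get(timeout=5)  # wait for a release
            self._created += 1
            return self._connect()

    def release(self, conn):
        self._idle.put(conn)

pool = ConnectionPool(connect=lambda: object(), max_size=2)
a = pool.acquire()
pool.release(a)
b = pool.acquire()
print(a is b)  # True -- the connection was reused, not recreated
```

Production clients (predis, phpredis, ioredis, redis-py) implement this for you; the point is that without it, every request pays a TCP handshake and, with AUTH/TLS, considerably more.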

Memory Analysis Tools

Identify memory bottlenecks and optimize storage:

redis-cli --bigkeys:
redis-cli --bigkeys
# Output:
# Biggest string found: "cache:homepage" has 2048576 bytes
# Biggest list found: "queue:jobs" has 50000 items
# Biggest hash found: "user:1000" has 1000 fields

MEMORY DOCTOR:
MEMORY DOCTOR
# Provides memory optimization suggestions

MEMORY STATS:
MEMORY STATS
# Detailed memory allocation statistics
Redis Memory Analyzer (RMA):
# Install RMA
pip install rma

# Generate memory report
rma --host 127.0.0.1 --port 6379 --types all

# Output: Detailed breakdown of memory by key pattern

Eviction Policies

Configure Redis to automatically remove keys when memory limit is reached:

redis.conf:
# Set maximum memory limit
maxmemory 2gb

# Eviction policy options:
maxmemory-policy allkeys-lru

# Policies:
# noeviction: Return error when memory limit reached
# allkeys-lru: Remove least recently used keys
# allkeys-lfu: Remove least frequently used keys
# allkeys-random: Remove random keys
# volatile-lru: Remove LRU keys with TTL set
# volatile-lfu: Remove LFU keys with TTL set
# volatile-random: Remove random keys with TTL
# volatile-ttl: Remove keys with shortest TTL
Policy Selection: Use allkeys-lru for caches where all keys are candidates for eviction. Use volatile-lru if only keys with TTL should be evicted.
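The allkeys-lru policy can be sketched in a few lines: track access order, and when the memory limit is hit (modeled here as a key-count limit for simplicity), evict the least recently used entry. A minimal simulation:

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of allkeys-lru: once the limit (here: max_keys, standing in
    for maxmemory) is hit, the least recently used key is evicted."""
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion/access order = recency order

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict the LRU entry

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as recently used
        return self.data[key]

cache = LRUCache(max_keys=2)
cache.set("a", 1); cache.set("b", 2)
cache.get("a")           # "a" is now most recently used
cache.set("c", 3)        # evicts "b", the least recently used
print(sorted(cache.data))  # ['a', 'c']
```

Real Redis approximates LRU by sampling a few keys per eviction (tunable via maxmemory-samples) rather than maintaining a perfect ordering, trading accuracy for speed.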

Persistence Trade-offs

Balance durability with performance by choosing the right persistence strategy:

Performance Comparison:
# No persistence (fastest, no durability)
save ""
appendonly no
# Best for: Pure cache, temporary data

# RDB only (good performance, some data loss)
save 900 1
save 300 10
save 60 10000
appendonly no
# Best for: Tolerable data loss (minutes)

# AOF with fsync everysec (balanced)
appendonly yes
appendfsync everysec
# Best for: Production with minimal data loss

# AOF with fsync always (slowest, most durable)
appendonly yes
appendfsync always
# Best for: Critical data requiring no loss

Slow Query Logging

Identify and optimize slow commands:

Configure Slow Log:
# redis.conf
# Log queries slower than 10ms
slowlog-log-slower-than 10000

# Keep last 128 slow queries
slowlog-max-len 128

View Slow Log:
# Get last 10 slow queries
SLOWLOG GET 10

# Example output:
1) 1) (integer) 12          # entry ID
   2) (integer) 1634567890  # unix timestamp
   3) (integer) 15234       # duration in microseconds
   4) 1) "KEYS"             # command and arguments
      2) "user:*"

# Reset slow log
SLOWLOG RESET
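Client libraries return each SLOWLOG GET entry as a nested array in that same order; a small sketch that maps the first four fields into a readable dict (the helper name parse_slowlog_entry is our own):

```python
def parse_slowlog_entry(entry):
    """Map one raw SLOWLOG GET entry (list form, as a client library returns it)
    to a readable dict; Redis reports the duration in microseconds."""
    entry_id, timestamp, duration_us, command = entry[:4]
    return {
        "id": entry_id,
        "timestamp": timestamp,
        "duration_ms": duration_us / 1000,
        "command": " ".join(command),
    }

raw = [12, 1634567890, 15234, ["KEYS", "user:*"]]  # the example output above
print(parse_slowlog_entry(raw))
# {'id': 12, 'timestamp': 1634567890, 'duration_ms': 15.234, 'command': 'KEYS user:*'}
```

Note the 15 ms KEYS call in the sample entry: exactly the blocking pattern the SCAN section warns against, surfacing in the slow log.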

Latency Monitoring

Track and diagnose latency spikes:

Enable Latency Monitoring:
# Monitor events taking >100ms
CONFIG SET latency-monitor-threshold 100

# View latency events
LATENCY LATEST
LATENCY HISTORY command
LATENCY DOCTOR

# Latency graph
LATENCY GRAPH command

Configuration Tuning

Optimize Redis configuration for your workload:

Performance Tuning:
# redis.conf

# Disable transparent huge pages (Linux)
# echo never > /sys/kernel/mm/transparent_hugepage/enabled

# TCP backlog
tcp-backlog 511

# Max clients
maxclients 10000

# Timeout idle connections
timeout 300

# TCP keepalive
tcp-keepalive 300

# Faster replication
repl-diskless-sync yes
repl-diskless-sync-delay 5

# Lazy freeing (non-blocking deletes)
lazyfree-lazy-eviction yes
lazyfree-lazy-expire yes
lazyfree-lazy-server-del yes
Lazy Freeing: Enables non-blocking deletion of keys in background threads, preventing blocking on large key deletions (big lists, hashes, sets).

Best Practices Summary

Performance Checklist:
  • ✅ Use pipelining for bulk operations
  • ✅ Use SCAN instead of KEYS
  • ✅ Enable connection pooling
  • ✅ Set appropriate TTLs to prevent memory leaks
  • ✅ Use efficient data structures (Hashes > Strings for objects)
  • ✅ Monitor memory with --bigkeys
  • ✅ Configure maxmemory and eviction policy
  • ✅ Enable lazy freeing for non-blocking deletes
  • ✅ Monitor slow log and optimize slow queries
  • ✅ Use shorter key names to save memory
  • ✅ Disable persistence for pure cache workloads
  • ✅ Use AOF with everysec for durability
Exercise: Analyze a Redis instance: 1) Run redis-cli --bigkeys to find largest keys, 2) Check SLOWLOG for slow queries, 3) Verify mem_fragmentation_ratio with INFO memory, 4) Test pipeline performance vs sequential commands with 10,000 SET operations and measure time difference.